NUMERICAL SOLUTION OF NONSMOOTH PROBLEMS


Abstract. Nonsmooth minimization problems arise in problems involving constraints or models describing certain discontinuities. The existence of solutions is established with classical arguments from the calculus of variations; iterative schemes for the practical computation of discretizations have to be adjusted to the particular features of the problem. Three abstract iterative methods are devised, and their application to a model obstacle problem and to a problem arising in image processing is discussed.

Date: December 7. Lecture notes of a course given at the SAMM 2015 "Materials with Discontinuities", University of Stuttgart, September.

1. Minimization problems

I.A. Constraints. Discretizing partial differential equations or evolution problems by numerical methods typically leads to solving finite-dimensional problems. Often, these are minimization problems or can be interpreted as optimality conditions for minimizing a given functional. We will thus consider the problem of finding a minimum for a function $F : A \to \mathbb{R}$ with a subset $A \subset \mathbb{R}^n$. We will assume that the function is defined on all of $\mathbb{R}^n$, which can be done by formally extending $F$ by the value $+\infty$, i.e., we identify $F$ with its extension

$F : \mathbb{R}^n \to \overline{\mathbb{R}} = \mathbb{R} \cup \{+\infty\}, \quad F|_{\mathbb{R}^n \setminus A} = +\infty.$

The subset $A \subset \mathbb{R}^n$ may represent the set of those elements in $\mathbb{R}^n$ that satisfy a constraint, i.e., the admissible elements. Via extending the function by the value $+\infty$, the constraint is enforced in the minimization problem, cf. Fig. 1 for an illustration.

I.B. Existence. Necessary for the well-posedness of the minimization problem is that $F$ is proper in the sense that there exist $w_0 \in \mathbb{R}^n$ and $c_{F,1} \in \mathbb{R}$ such that $F(w_0) \le c_{F,1}$, i.e., requiring that $A \neq \emptyset$. Typically, the function values $F(w)$ become large or unbounded when $\|w\|$ becomes large. We describe this by the coercivity condition

$\|w_j\| \to \infty \implies F(w_j) \to +\infty$

for every sequence $(w_j)_{j \in \mathbb{N}} \subset \mathbb{R}^n$. This condition implies that if we have a sequence for which the function values are uniformly bounded, i.e., $F(w_j) \le c_0$ for all $j \in \mathbb{N}$, then the sequence itself is bounded.

Figure 1. Extending a function by the value $+\infty$ outside a closed admissible set (left); examples of lower semicontinuous (middle) and strictly convex functions (right).

We establish the existence of a sequence $(w_j)_{j \in \mathbb{N}}$ with finite function values by assuming that $F$ is bounded from below, i.e., there exists $c_{F,2} \in \mathbb{R}$ such that $F(w) \ge c_{F,2}$ for all $w \in \mathbb{R}^n$. This implies that

$c_{F,2} \le \inf_{w \in \mathbb{R}^n} F(w) \le c_{F,1},$

and hence by definition of the infimum there exists an infimizing sequence $(w_j)_{j \in \mathbb{N}}$ with

$\lim_{j \to \infty} F(w_j) = \inf_{v \in \mathbb{R}^n} F(v).$

By the coercivity condition the sequence $(w_j)_{j \in \mathbb{N}}$ is bounded in $\mathbb{R}^n$. It hence has accumulation points. To show that these are minimizers for $F$, we assume that $F$ is lower semicontinuous on a sufficiently large set, e.g., on the closure $\overline{V}$ of the sublevel set

$V = \{w \in \mathbb{R}^n : F(w) \le F(w_0) \le c_{F,1}\}.$

Lower semicontinuity of $F$ restricted to $\overline{V}$ means that whenever we are given a sequence $(v_k)_{k \in \mathbb{N}} \subset \overline{V}$ that converges to $v \in \overline{V}$ we have that

$F(v) \le \liminf_{k \to \infty} F(v_k),$

cf. Fig. 1. If $F$ is continuous on $\overline{V}$ then this holds with equality and the limit inferior can be replaced by the usual limit. Applying the lower semicontinuity to a convergent subsequence $(w_{j_k})_{k \in \mathbb{N}}$ with limit $w \in \mathbb{R}^n$ we have that

$F(w) \le \liminf_{k \to \infty} F(w_{j_k}) = \inf_{v \in \mathbb{R}^n} F(v).$

Here we used that, except for finitely many members, the infimizing sequence belongs to $V$, and hence also $w \in \overline{V}$. We have thus established in a nonconstructive way the existence of a minimizer for $F$.

I.C. Uniqueness. Uniqueness of minimizers can be shown if $\overline{V}$ is convex and $F$ is strictly convex in $\overline{V}$, i.e., for all distinct $v_1, v_2 \in \overline{V}$ we have that $v = (v_1 + v_2)/2 \in \overline{V}$ and

$F(v) < \tfrac12 F(v_1) + \tfrac12 F(v_2),$

i.e., the graph of $F$ lies below the secant defined by $v_1$ and $v_2$, cf. Fig. 1. The existence of two minimizers $w, w' \in \overline{V}$ leads to a contradiction since we then could construct an element with strictly lower function value. We have thus proved the following proposition.

Proposition I.1 (Existence and uniqueness). Let $F : \mathbb{R}^n \to \overline{\mathbb{R}}$ be proper, coercive, bounded from below, and lower semicontinuous on the closure $\overline{V}$ of the sublevel set of an admissible element. Then there exists a minimizer on $\mathbb{R}^n$ for $F$ which belongs to $\overline{V}$. There exists a unique minimizer if $F$ is in addition strictly convex on $\overline{V}$.

The same concepts apply to minimization problems on reflexive Banach spaces, i.e., for minimizing $F : X \to \overline{\mathbb{R}}$, cf., e.g., [ABM06]. In this case coercivity implies existence of weakly convergent subsequences of infimizing sequences, and lower semicontinuity has to be formulated with respect to weak convergence, which is a stronger condition in general.

2. Elementary examples

II.A. Nondifferentiability. We discuss a simple but prototypical nondifferentiable minimization problem that can be solved explicitly. For this we let $\tilde w \in \mathbb{R}^n$, $\gamma > 0$, and define $F : \mathbb{R}^n \to \mathbb{R}$ by

$F(w) = \tfrac12 \|w - \tilde w\|^2 + \gamma \|w\|.$

The function $F$ is proper, bounded from below by zero, coercive since $\gamma \|w\| \le F(w)$, and continuous on $\mathbb{R}^n$. Moreover, it is strictly convex since for $v_1, v_2 \in \mathbb{R}^n$ and $v = (v_1 + v_2)/2$ we have by an application of the triangle inequality that

$\gamma \|v\| \le \tfrac{\gamma}{2} \|v_1\| + \tfrac{\gamma}{2} \|v_2\|$

and, using $(a + b)^2 \le 2a^2 + 2b^2$ for $a, b \ge 0$,

$\tfrac12 \|v - \tilde w\|^2 \le \tfrac12 \big( \tfrac12 \|v_1 - \tilde w\| + \tfrac12 \|v_2 - \tilde w\| \big)^2 \le \tfrac14 \|v_1 - \tilde w\|^2 + \tfrac14 \|v_2 - \tilde w\|^2.$

These estimates imply that

$F(v) \le \tfrac12 \big( F(v_1) + F(v_2) \big)$

with equality if and only if $v_1 = v_2$. To specify the unique minimizer $w$ we distinguish two cases. Assume first that it satisfies $w \neq 0$. Noting that $F$ is differentiable in $\mathbb{R}^n \setminus \{0\}$, the optimality condition reads

$0 = DF(w) = w - \tilde w + \gamma \frac{w}{\|w\|}.$

The equivalent identity

$\tilde w = \big( \|w\| + \gamma \big) \frac{w}{\|w\|}$

implies that $w$ is parallel to the nonvanishing vector $\tilde w$. Taking the modulus of the identity shows that

$\|\tilde w\| = \|w\| + \gamma, \quad \text{i.e.,} \quad \|w\| = \|\tilde w\| - \gamma.$

It thus follows that

$w = \big( \|\tilde w\| - \gamma \big) \frac{\tilde w}{\|\tilde w\|}.$

Assume next that $w = 0$. To verify for which vectors $\tilde w$ this is the case, let $v \in \mathbb{R}^n$ be arbitrary and consider for $t \ge 0$ the function

$\varphi(t) = F(tv) = \tfrac12 \|tv - \tilde w\|^2 + \gamma t \|v\|.$

As $\varphi$ is minimal for $t = 0$ we have for the right-sided derivative of $\varphi$ at $t = 0$ that

$0 \le \varphi'(0) = -\tilde w \cdot v + \gamma \|v\|.$

This is the case for all $v \in \mathbb{R}^n$ if and only if $\|\tilde w\| \le \gamma$. We thus have that the solution of the minimization of $F$ is given by

$w = \begin{cases} \big( \|\tilde w\| - \gamma \big) \frac{\tilde w}{\|\tilde w\|} & \text{if } \|\tilde w\| \ge \gamma, \\ 0 & \text{if } \|\tilde w\| \le \gamma \end{cases} \; = \; \big( \|\tilde w\| - \gamma \big)_+ \frac{\tilde w}{\|\tilde w\|},$

where $(x)_+ = \max\{x, 0\}$ for $x \in \mathbb{R}$.

II.B. Nonlinear projection. The second example concerns a function that is finite only on a closed ball. To define this in a concise way, we use the indicator functional $I_A : \mathbb{R}^n \to \overline{\mathbb{R}}$ associated with a set $A \subset \mathbb{R}^n$ via

$I_A(w) = \begin{cases} 0 & \text{if } w \in A, \\ +\infty & \text{if } w \notin A. \end{cases}$

By $K_r(0)$ we denote the closed ball of radius $r$ in $\mathbb{R}^n$. Let $\tilde w \in \mathbb{R}^n$, $r > 0$, and define $F : \mathbb{R}^n \to \overline{\mathbb{R}}$ by

$F(w) = \tfrac12 \|w - \tilde w\|^2 + I_{K_r(0)}(w).$

The function $F$ is proper, bounded from below by zero, coercive, and continuous on the set $K_r(0)$. Moreover, it is strictly convex on the set $K_r(0)$. The unique minimizer is the best approximation of $\tilde w$ in $K_r(0)$, i.e.,

$w = \begin{cases} \tilde w & \text{if } \|\tilde w\| \le r, \\ r \frac{\tilde w}{\|\tilde w\|} & \text{if } \|\tilde w\| \ge r \end{cases} \; = \; \tilde w \, \frac{r}{\max\{r, \|\tilde w\|\}}.$
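Both closed-form minimizers are easy to implement and test. The following is a minimal sketch in Python/NumPy; the names shrink and project_ball are our own and not taken from the notes or any library:

```python
import numpy as np

def shrink(w_tilde, gamma):
    """Minimizer of (1/2)|w - w_tilde|^2 + gamma*|w| (Section II.A)."""
    norm = np.linalg.norm(w_tilde)
    if norm <= gamma:
        return np.zeros_like(w_tilde)
    return (1.0 - gamma / norm) * w_tilde

def project_ball(w_tilde, r):
    """Best approximation of w_tilde in the closed ball K_r(0) (Section II.B)."""
    norm = np.linalg.norm(w_tilde)
    return w_tilde * (r / max(r, norm))

w_tilde = np.array([3.0, 4.0])    # |w_tilde| = 5
print(shrink(w_tilde, 2.0))       # [1.8, 2.4] = (5 - 2)/5 * w_tilde
print(project_ball(w_tilde, 1.0)) # [0.6, 0.8], the unit vector w_tilde/|w_tilde|
```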

3. Convex analysis

III.A. Subgradients. The examples from the previous section can be understood with techniques from convex analysis. We let $F : \mathbb{R}^n \to \overline{\mathbb{R}}$ be a proper, convex, and lower semicontinuous function. A generalization of derivatives is obtained via the subdifferential of $F$, defined for $w \in \mathbb{R}^n$ as the set $\partial F(w) \subset \mathbb{R}^n$ consisting of all $s \in \mathbb{R}^n$ such that

$s \cdot (v - w) + F(w) \le F(v) \quad \text{for all } v \in \mathbb{R}^n,$

i.e., the set of all slopes $s \in \mathbb{R}^n$ that define a tangent plane to the graph of $F$ in $(w, F(w))$. The elements in $\partial F(w)$ are called subgradients of $F$ at $w$. A necessary condition for a minimizer $w$ of $F$ is that $0 \in \partial F(w)$. Note that $\partial F(w) = \emptyset$ if $F(w) = +\infty$ and that $\partial F(w) = \{DF(w)\}$ if $F$ is differentiable at $w$. Moreover, if $G$ is convex and differentiable then we have that, cf. [ET76],

$\partial (F + G)(w) = \partial F(w) + DG(w).$

The following examples are illustrated in Fig. 2.

Examples III.1. (i) For the indicator functional $I_{K_r(0)}$ we have

$\partial I_{K_r(0)}(w) = \begin{cases} \{0\} & \text{if } \|w\| < r, \\ \{sw : s \ge 0\} & \text{if } \|w\| = r, \\ \emptyset & \text{if } \|w\| > r. \end{cases}$

(ii) For the function $w \mapsto \gamma \|w\|$ we have

$\partial (\gamma \|\cdot\|)(w) = \begin{cases} \big\{ \gamma \frac{w}{\|w\|} \big\} & \text{if } w \neq 0, \\ K_\gamma(0) & \text{if } w = 0. \end{cases}$

Figure 2. Typical subgradients of an indicator functional (left) and a degree-one homogeneous function (right).
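The characterization in Examples III.1(ii) can be tested directly: at $w = 0$ every $s \in K_\gamma(0)$ must satisfy the subgradient inequality $s \cdot (v - 0) + F(0) \le F(v)$ for all $v$. A small randomized check (our own construction, not from the notes):

```python
import numpy as np

gamma, n = 2.0, 3
rng = np.random.default_rng(0)
F = lambda v: gamma * np.linalg.norm(v)

for _ in range(1000):
    s = rng.normal(size=n)
    s *= gamma / max(gamma, np.linalg.norm(s))  # project s into K_gamma(0)
    v = rng.normal(size=n)
    # subgradient inequality s.(v - 0) + F(0) <= F(v) at w = 0
    assert s @ v + F(np.zeros(n)) <= F(v) + 1e-12
print("all subgradient inequalities satisfied at w = 0")
```

The assertion holds by the Cauchy–Schwarz inequality, since $s \cdot v \le \|s\| \|v\| \le \gamma \|v\|$ for $\|s\| \le \gamma$.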

III.B. Convex conjugate. In many minimization problems the concept of convex duality reveals important information about the problem. It is based on the convex conjugate $F^*$ of a function $F$, which is defined via

$F^*(s) = \sup_{w \in \mathbb{R}^n} \; w \cdot s - F(w).$

The convex conjugate is a convex function that is lower semicontinuous. The arguments of $F^*$ are slopes related to $F$ at $w$. To see this, assume that $F : \mathbb{R} \to \mathbb{R}$ is continuously differentiable and $F'$ is strictly monotone. Then we may regard $s = F'(w)$ as the independent variable, and the function $G(s) = ws - F(w)$ satisfies $G'(s) = w$. A geometric interpretation of $F^*$ is shown in Fig. 3, cf. [ZRM09]. The quantity $-F^*(s)$ is the value of the tangent to $F$ in $w$ at $0$. If $F$ is not differentiable then the minimal one of all such values is chosen.

Figure 3. Geometric construction of the convex conjugate for differentiable (left) and nondifferentiable functions (right).

We have that $F^{**} = F$ if and only if $F$ is convex and lower semicontinuous. We always have the Fenchel inequality

$s \cdot w \le F(w) + F^*(s),$

where equality holds if and only if $s \in \partial F(w)$, cf. Fig. 3. We also have the equivalence

$s \in \partial F(w) \iff w \in \partial F^*(s).$

Formally, this means that $(\partial F)^{-1} = \partial F^*$.

Examples III.2. (i) We have that $(I_{K_\gamma(0)})^*(s) = \gamma \|s\|$. (ii) For $1 < p, q < \infty$ with $1/p + 1/q = 1$ we have $\big( \tfrac1p \|\cdot\|^p \big)^* = \tfrac1q \|\cdot\|^q$.
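Example III.2(ii) can be checked by evaluating the supremum in the definition of $F^*$ on a grid; a rough one-dimensional sanity check (grid bounds chosen ad hoc for the tested slopes):

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)                  # conjugate exponent: 1/p + 1/q = 1
w = np.linspace(-10.0, 10.0, 200001)

def conjugate(s):
    # brute-force evaluation of F*(s) = sup_w (w*s - |w|^p / p)
    return np.max(w * s - np.abs(w)**p / p)

for s in [0.0, 0.5, 1.0, 2.0]:
    print(s, conjugate(s), np.abs(s)**q / q)   # last two columns agree
```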

III.C. Proximity operators. For a convex, proper, lower semicontinuous function $F : \mathbb{R}^n \to \overline{\mathbb{R}}$ which is bounded from below, we have that the mapping

$w \mapsto \tfrac12 \|w - \tilde w\|^2 + F(w)$

has a unique minimizer, which we refer to by the proximity operator $\mathrm{prox}_F : \mathbb{R}^n \to \mathbb{R}^n$, i.e.,

$\mathrm{prox}_F(\tilde w) = \mathrm{argmin}_{w \in \mathbb{R}^n} \; \tfrac12 \|w - \tilde w\|^2 + F(w).$

The optimality condition for the minimizer reads

$0 \in (w - \tilde w) + \partial F(w) \iff \tilde w \in (I + \partial F)(w),$

which motivates the notation $\mathrm{prox}_F(\tilde w) = (I + \partial F)^{-1}(\tilde w)$. The proximity operator for a function $F$ can be expressed in terms of its convex conjugate $F^*$.

Remarks III.3. (i) Moreau's identity states that we have

$\mathrm{prox}_{\gamma F}(\tilde w) = \tilde w - \gamma \, \mathrm{prox}_{\gamma^{-1} F^*}(\tilde w / \gamma);$

note that for an indicator functional $F^* = I_K$ the scaling is irrelevant, i.e., $\mathrm{prox}_{\gamma^{-1} I_K} = \mathrm{prox}_{I_K}$.
(ii) Note that $\mathrm{prox}_F$ can be equivalently defined via minimizing $w \mapsto \tfrac12 \|w\|^2 - w \cdot \tilde w + F(w)$.

Moreau's identity applies to indicator functionals and provides the proximity operator of the associated degree-one homogeneous function $(I_K)^*$.

Example III.4. For $F = I_{K_1(0)}$ we have $F^* = \|\cdot\|$ and

$\mathrm{prox}_{\gamma F^*}(\tilde w) = \tilde w - \gamma \, \mathrm{prox}_{I_{K_1(0)}}(\tilde w / \gamma) = \begin{cases} \tilde w - \gamma \, \tilde w / \gamma = 0, & \|\tilde w\| / \gamma \le 1, \\ \tilde w - \gamma \, \tilde w / \|\tilde w\|, & \|\tilde w\| / \gamma \ge 1 \end{cases} \; = \; \big( \|\tilde w\| - \gamma \big)_+ \tilde w / \|\tilde w\|,$

cf. Section 2.

III.D. Duality. We consider the minimization of the sum of two convex functions, i.e., the problem of minimizing

$J(w) = F(Bw) + G(w)$

on a Banach space $X$, e.g., $X = \mathbb{R}^n$, with a bounded linear operator $B : X \to Y$ for another Banach space $Y$, e.g., $Y = \mathbb{R}^m$. Noting that for proper, convex, lower semicontinuous functions we have $F^{**} = F$, we may replace

$F(Bw) = F^{**}(Bw)$ by a supremum to derive the identity

$\inf_w J(w) = \inf_w \big[ F(Bw) + G(w) \big]$
$\quad = \inf_w \sup_s \big[ (Bw) \cdot s - F^*(s) + G(w) \big]$
$\quad \ge \sup_s \inf_w \big[ -F^*(s) + (Bw) \cdot s + G(w) \big]$
$\quad = \sup_s \big[ -F^*(s) + \inf_w \big( (B^* s) \cdot w + G(w) \big) \big]$
$\quad = \sup_s \big[ -F^*(s) - \sup_w \big( (-B^* s) \cdot w - G(w) \big) \big]$
$\quad = \sup_s \big[ -F^*(s) - G^*(-B^* s) \big].$

In this derivation we used that exchanging the order of the infimum and the supremum results in an inequality, we wrote the infimum as the negative of a supremum, and we used the adjoint or transpose $B^*$ of the operator $B$. If, e.g., $G$ is continuous then the inequality is an equality, i.e., we have strong duality in the sense that

$\inf_w \big[ F(Bw) + G(w) \big] = \sup_s \big[ -F^*(s) - G^*(-B^* s) \big],$

cf. [ET76] for details. We have expressed the minimization problem as a maximization problem for the functional

$K(s) = -F^*(s) - G^*(-B^* s),$

which is called the dual problem associated with the primal problem defined by the minimization of $J$. A geometric explanation of the duality relation with $B = I$ is provided in Fig. 4. The primal problem seeks a point $w$ that minimizes the vertical distance between the graphs of $F$ and $-G$. The dual problem seeks a slope $s$ such that the distance between parallel tangents with slope $s$ to $F$ and $-G$ is maximal. From the definitions of $F^*$ and $G^*$ we find that the optimal values $s$ and $w$ are related via the inclusions

$Bw \in \partial F^*(s), \quad -B^* s \in \partial G(w),$

which are in fact optimality conditions for the saddle-point problem occurring in the derivation of the dual problem.

Figure 4. Geometric interpretation of the duality relation. The minimal vertical distance between $F$ and $-G$ equals the maximal distance between parallel tangents to $F$ and $-G$.
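For the model problem of Section II.A with $B = I$ we have $F = \gamma \|\cdot\|$ and $G = \tfrac12 \|\cdot - \tilde w\|^2$, hence $F^* = I_{K_\gamma(0)}$ and $G^*(r) = \tfrac12 \|r\|^2 + r \cdot \tilde w$, so both the primal minimizer (shrinkage) and the dual maximizer (the projection of $\tilde w$ onto $K_\gamma(0)$) are known in closed form. A short numerical check of strong duality under this setup (the specific data below are our own):

```python
import numpy as np

gamma = 2.0
w_tilde = np.array([3.0, 4.0])

# primal: J(w) = (1/2)|w - w_tilde|^2 + gamma*|w|, minimized by shrinkage
nrm = np.linalg.norm(w_tilde)
w_opt = max(nrm - gamma, 0.0) / nrm * w_tilde
primal = 0.5 * np.linalg.norm(w_opt - w_tilde)**2 + gamma * np.linalg.norm(w_opt)

# dual: K(s) = -F*(s) - G*(-s) = s.w_tilde - (1/2)|s|^2 on |s| <= gamma,
# maximized by the projection of w_tilde onto K_gamma(0)
s_opt = w_tilde * (gamma / max(gamma, nrm))
dual = s_opt @ w_tilde - 0.5 * np.linalg.norm(s_opt)**2

print(primal, dual)   # both equal 8.0 here: strong duality holds
```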

4. Iterative solution methods

IV.A. Splitting method. A general approach to the practical minimization of the functional

$J(w) = F(Bw) + G(w)$

is based on introducing the new variable $y = Bw$ and imposing this equality via a Lagrange multiplier and a regularizing term, i.e., considering the augmented Lagrange functional

$L_\tau(w, y; \lambda) = F(y) + G(w) + \lambda \cdot (Bw - y) + \tfrac{\tau}{2} \|Bw - y\|_Y^2.$

Here we let $\tau > 0$ be a parameter and assume that $Y$ is a Hilbert space. Minimizing $J$ is now equivalent to finding a saddle point for $L_\tau$, i.e.,

$\inf_{w \in X} J(w) = \inf_{(w,y) \in X \times Y} \; \sup_{\lambda \in Y} \; L_\tau(w, y; \lambda).$

To determine a saddle point, we alternatingly minimize with respect to $w$ and $y$ and carry out an ascent step with respect to $\lambda$.

Algorithm IV.1 (Splitting method). Choose $y^{-1}, \lambda^0 \in Y$ and $\tau > 0$ and set $j = 0$.
(1) Compute the minimizer $w^j \in X$ of the mapping $w \mapsto L_\tau(w, y^{j-1}; \lambda^j)$.
(2) Compute the minimizer $y^j \in Y$ of the mapping $y \mapsto L_\tau(w^j, y; \lambda^j)$.
(3) Compute the maximizer $\lambda^{j+1} \in Y$ of the mapping $\lambda \mapsto -\tfrac{1}{2\tau} \|\lambda - \lambda^j\|_Y^2 + L_\tau(w^j, y^j; \lambda)$, i.e., set $\lambda^{j+1} = \lambda^j + \tau (Bw^j - y^j)$.
(4) Stop if $\|y^j - y^{j-1}\|_Y + \|Bw^j - y^j\|_Y \le \varepsilon_{\mathrm{stop}}$; increase $j \to j+1$ and continue with (1) otherwise.

The algorithm always terminates; global convergence holds if $F$ or $G$ is strongly convex, cf. [FG83]. The important feature of the algorithm is that the minimization of $F$ and $G$ is decoupled. For certain simple functionals, such as degree-one homogeneous functions or indicator functionals as discussed in Section 2, the minimization with respect to $w$ or $y$ can thus be computed explicitly, i.e., they are obtained via evaluating proximity operators. In the maximization with respect to $\lambda$ the additional term $-\tfrac{1}{2\tau} \|\lambda - \lambda^j\|_Y^2$ corresponds to an ascent or gradient method. Including this quadratic term is crucial for the convergence analysis. We remark that the splitting algorithm can also be used to solve the dual problem via minimizing $-K$.
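For the model problem of Section II.A with $B = I$, $F = \gamma \|\cdot\|$ and $G = \tfrac12 \|\cdot - \tilde w\|^2$, all three substeps of Algorithm IV.1 are explicit: step (1) is a linear solve, step (2) evaluates the shrinkage operator, and step (3) is the multiplier update. A minimal NumPy sketch under these assumptions (the data are ours; the iterates should converge to the shrinkage solution $(\|\tilde w\| - \gamma)_+ \tilde w / \|\tilde w\|$):

```python
import numpy as np

def shrink(v, mu):
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= mu else (1.0 - mu / n) * v

gamma, tau, eps = 2.0, 1.0, 1e-10
w_tilde = np.array([3.0, 4.0])

y = np.zeros_like(w_tilde)    # y^{-1}
lam = np.zeros_like(w_tilde)  # lambda^0

for j in range(1000):
    # (1) minimize L_tau in w: (w - w_tilde) + lam + tau*(w - y) = 0
    w = (w_tilde - lam + tau * y) / (1.0 + tau)
    # (2) minimize L_tau in y: prox of (gamma/tau)*|.| at w + lam/tau
    y_new = shrink(w + lam / tau, gamma / tau)
    # (3) ascent step for the multiplier
    lam = lam + tau * (w - y_new)
    # (4) stopping criterion of Algorithm IV.1
    if np.linalg.norm(y_new - y) + np.linalg.norm(w - y_new) <= eps:
        y = y_new
        break
    y = y_new

print(w)   # approximately [1.8, 2.4]
```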

IV.B. Primal-dual method. Another approach to solving the convex minimization problem defined by the functional $J$ is based on considering the saddle-point problem

$\inf_w \sup_s \; -F^*(s) + (Bw) \cdot s + G(w)$

which occurs in the passage from the primal to the dual formulation. The following algorithm alternatingly carries out descent and ascent steps with respect to $w$ and $s$, respectively. This corresponds to evaluating proximity operators related to $F^*$ and $G$. To guarantee convergence under minimal conditions on the involved step size $\tau > 0$, the approximations of the variable $w$ are extrapolated in the first step of the algorithm.

Algorithm IV.2 (Primal-dual method). Choose $w^{-1} \in X$ and $s^{-1} \in Y$, set $d_t w^{-1} = 0$ and $j = 0$.
(1) Define the extrapolated value $\tilde w^j = w^{j-1} + \tau d_t w^{j-1}$.
(2) Compute a maximizer $s^j \in Y$ of the mapping $s \mapsto -\tfrac{1}{2\tau} \|s - s^{j-1}\|_Y^2 + (B \tilde w^j) \cdot s - F^*(s)$.
(3) Compute a minimizer $w^j \in X$ of the mapping $w \mapsto \tfrac{1}{2\tau} \|w - w^{j-1}\|_X^2 + (Bw) \cdot s^j + G(w)$.
(4) Stop if $\|w^j - w^{j-1}\|_X \le \varepsilon_{\mathrm{stop}}$; set $d_t w^j = (w^j - w^{j-1}) / \tau$, increase $j \to j+1$, and continue with (1) otherwise.

Global convergence of the iteration can be proved if the step size $\tau$ is chosen sufficiently small so that $\tau \|B\|_{L(X,Y)} < 1$, cf., e.g., [Bar15b].

IV.C. Semismooth Newton method. In some situations the optimality conditions related to a nonsmooth minimization problem can be reformulated as a nonlinear, nondifferentiable system of equations. A generalized notion of derivative allows us to formulate and analyze a generalized Newton scheme.

Definition IV.3. The mapping $N : X \to X$ is called Newton differentiable at $x \in X$ if there exist $\varepsilon > 0$ and a function $\delta N : B_\varepsilon(x) \to L(X, X)$ such that

$\lim_{\|z\|_X \to 0} \frac{\|N(x + z) - N(x) - \delta N(x + z)[z]\|_X}{\|z\|_X} = 0.$

The function $\delta N$ is called Newton derivative of $N$ at $x$. Note that, in contrast to the definition of the classical derivative, the function $\delta N$ is evaluated at the perturbed point $x + z$. This is exactly the quantity that arises in the analysis of the classical Newton scheme.

Example IV.4. The function $N : \mathbb{R} \to \mathbb{R}$, $x \mapsto \max\{x, 0\}$, is Newton differentiable with Newton derivative $\delta N(x) = 0$ for $x < 0$, $\delta N(0) = s$ for arbitrary $s \in [0, 1]$, and $\delta N(x) = 1$ for $x > 0$.

Assuming that $N$ is Newton differentiable with invertible Newton derivative, we may formulate the following scheme.

Algorithm IV.5 (Newton method). Let $x^0 \in X$ and set $j = 0$.
(1) Compute $d^{j+1} \in X$ such that $\delta N(x^j)[d^{j+1}] = -N(x^j)$.
(2) Stop if $\|d^{j+1}\|_X \le \varepsilon_{\mathrm{stop}}$; set $x^{j+1} = x^j + d^{j+1}$, increase $j \to j+1$, and continue with (1) otherwise.

The Newton method is locally superlinearly convergent if $N(x) = 0$ and $\delta N$ is boundedly invertible in a neighbourhood of $x$. Note that the method coincides with the classical Newton scheme if $N$ is differentiable.
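As a toy illustration of Algorithm IV.5 (our own example, not taken from the notes), consider the scalar equation $N(x) = \max\{x, 0\} + x - 1 = 0$ with root $x = 1/2$; its Newton derivative is the one from Example IV.4 plus one:

```python
# semismooth Newton (Algorithm IV.5) for N(x) = max(x,0) + x - 1 = 0;
# the root is x = 1/2, and delta_N(x) = 1_{x>0} + 1 (choosing s = 1 at x = 0)
def N(x):
    return max(x, 0.0) + x - 1.0

def delta_N(x):
    return (1.0 if x >= 0.0 else 0.0) + 1.0

x, eps = -5.0, 1e-12
for j in range(100):
    d = -N(x) / delta_N(x)   # solve delta_N(x^j) d = -N(x^j)
    if abs(d) <= eps:
        break
    x = x + d
print(x)   # 0.5, reached after finitely many steps
```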

5. Obstacle problem

V.A. Mathematical model. The obstacle problem models the constrained deflection $u : \Omega \to \mathbb{R}$ of an elastic membrane occupying the domain $\Omega \subset \mathbb{R}^2$ subject to a vertical force $f : \Omega \to \mathbb{R}$. The constraint is determined by a function $\chi : \Omega \to \mathbb{R}$ that models an obstacle, cf. Fig. 5.

Figure 5. Schematic description of the obstacle problem.

The mathematical model assumes $|\nabla u| \ll 1$ and leads to the following minimization problem, cf., e.g., [Rod87]:

Minimize $I(u) = \tfrac12 \int_\Omega |\nabla u|^2 \, dx - \int_\Omega f u \, dx$
subject to $u \ge \chi$ in $\Omega$, $u = 0$ on $\partial\Omega$.

With the convex set

$K = \{v \in H^1_0(\Omega) : v \ge \chi \text{ in } \Omega\}$

the minimization problem can be recast as minimizing $J(u) = I(u) + I_K(u)$. Since $I$ and $I_K$ are weakly lower semicontinuous on $H^1_0(\Omega)$, and since $I$ is strictly convex, it follows that there exists a unique solution $u \in K$. It satisfies the optimality condition

$0 \in DI(u) + \partial I_K(u),$

i.e., there exists a Lagrange multiplier $\lambda \in \partial I_K(u)$ such that

$0 = -\Delta u - f + \lambda.$

We have that $\lambda \le 0$ in $\Omega$ and $\lambda = 0$ in the complement of the coincidence set $\{x \in \Omega : u(x) = \chi(x)\}$.

Remark V.1. The solution $u \in K$ of the obstacle problem is equivalently characterized by the variational inequality

$(\nabla u, \nabla(v - u)) \ge (f, v - u) \quad \text{for all } v \in K$

with the $L^2$ scalar product $(\cdot, \cdot)$. This is obtained, e.g., by computing the right-sided derivative at $0$ of the mapping $t \mapsto J(u + t(v - u))$.

V.B. Complementarity. The properties of the Lagrange multiplier lead to the complementarity principle

$\lambda (u - \chi) = 0, \quad \lambda \le 0, \quad u - \chi \ge 0.$

Given an arbitrary number $c > 0$ these relations are satisfied if and only if the complementarity function

$C(u, \lambda) = \lambda - \min\{0, \lambda + c(u - \chi)\}$

vanishes in $\Omega$. The solution of the obstacle problem is thus equivalent to determining a pair $(u, \lambda)$ such that

$-\Delta u - f + \lambda = 0, \quad C(u, \lambda) = 0.$

V.C. Discretization. We approximate the obstacle problem with a finite element method, i.e., we look for a function $u_h$ in the finite-dimensional subspace $S^1_0(T_h)$ of $H^1_0(\Omega)$, defined for a partition $T_h$ of $\Omega$ into triangles or tetrahedra via

$S^1(T_h) = \{v_h \in C(\overline\Omega) : v_h|_T \text{ affine for all } T \in T_h\}$

and $S^1_0(T_h) = S^1(T_h) \cap H^1_0(\Omega)$. There exists a nodal basis $(\varphi_z : z \in N_h)$ associated with the set $N_h$ of vertices of triangles or tetrahedra such that

$v_h = \sum_{z \in N_h} v_h(z) \varphi_z$

for every $v_h \in S^1(T_h)$. Choosing an approximation $\chi_h \in S^1(T_h)$ of the obstacle $\chi$, the discretized obstacle problem reads:

Minimize $I(u_h) = \tfrac12 \int_\Omega |\nabla u_h|^2 \, dx - \int_\Omega f u_h \, dx$
subject to $u_h \ge \chi_h$ in $\Omega$, $u_h = 0$ on $\partial\Omega$.

If $\Omega$ is convex and $\chi \in H^2(\Omega)$ then we have $u \in H^2(\Omega)$ and the quasi-optimal error estimate

$\|\nabla(u - u_h)\|_{L^2(\Omega)} \le c h \|D^2 u\|_{L^2(\Omega)},$

cf. [Cia78, GLT81], where $h > 0$ is the maximal mesh-size of the triangulation $T_h$. Note that in general we have $u \notin H^3(\Omega)$ even if $\chi \in C^\infty(\Omega)$, cf. [KS80].

V.D. Semismooth Newton scheme. Arguing as in the continuous situation, the discrete obstacle problem is equivalent to solving the system of equations

$AU + \Lambda = B, \quad U \ge Z, \quad \Lambda \le 0, \quad \Lambda_i (U - Z)_i = 0, \quad i = 1, 2, \dots, L.$

Here, $A$ is the finite element stiffness matrix, $U, \Lambda, Z \in \mathbb{R}^L$ are vectors that contain the nodal values of $u_h, \lambda_h, \chi_h$, and $B$ is an approximation of the right-hand side functional defined by $f$. The equations are equivalent to

$AU + \Lambda = B, \quad C(U, \Lambda) = \Lambda - \min\{0, \Lambda + c(U - Z)\} = 0,$

where the minimum is taken componentwise. This can equivalently be written as $N(U, \Lambda) = 0$, where

$N(U, \Lambda) = \begin{bmatrix} N_1(U, \Lambda) \\ N_2(U, \Lambda) \end{bmatrix} = \begin{bmatrix} AU + \Lambda - B \\ C(U, \Lambda) \end{bmatrix}.$

For a set $\mathcal{A} \subset \{1, 2, \dots, L\}$ we define $I_{\mathcal{A}} \in \mathbb{R}^{L \times L}$ for $1 \le i, j \le L$ by

$(I_{\mathcal{A}})_{ij} = \begin{cases} 1 & \text{if } i = j \text{ and } i \in \mathcal{A}, \\ 0 & \text{otherwise.} \end{cases}$

We denote $\mathcal{A}^c = \{1, 2, \dots, L\} \setminus \mathcal{A}$ and note that $I_{\mathcal{A}} + I_{\mathcal{A}^c} = I_L$ is the identity matrix in $\mathbb{R}^{L \times L}$.

Proposition V.2. The function $N$ is Newton differentiable at every $(U, \Lambda) \in \mathbb{R}^L \times \mathbb{R}^L$, and with the set

$\mathcal{A} = \{1 \le i \le L : \Lambda_i + c(U_i - Z_i) < 0\}$

we have

$\delta N(U, \Lambda) = \begin{bmatrix} \delta N_1(U, \Lambda) \\ \delta N_2(U, \Lambda) \end{bmatrix} = \begin{bmatrix} A & I_L \\ -c I_{\mathcal{A}} & I_{\mathcal{A}^c} \end{bmatrix}.$

In particular, $\delta N(U, \Lambda)$ is regular, and the semismooth Newton scheme for the iterative solution of $N(U, \Lambda) = 0$ is well-defined and locally superlinearly convergent.

One step of the semismooth Newton scheme, i.e., the linear system of equations

$\delta N(\tilde U, \tilde\Lambda)[\delta U, \delta\Lambda] = -N(\tilde U, \tilde\Lambda)$

for a given iterate $(\tilde U, \tilde\Lambda)$, is equivalent to the identity

$\begin{bmatrix} A & I_L \\ -c I_{\tilde{\mathcal{A}}} & I_{\tilde{\mathcal{A}}^c} \end{bmatrix} \begin{bmatrix} \delta U \\ \delta\Lambda \end{bmatrix} = -\begin{bmatrix} A \tilde U + \tilde\Lambda - B \\ \tilde\Lambda - \min\{0, \tilde\Lambda + c(\tilde U - Z)\} \end{bmatrix},$

where $\tilde{\mathcal{A}} = \{1 \le i \le L : \tilde\Lambda_i + c(\tilde U_i - Z_i) < 0\}$. This system can be written as

$A(\tilde U + \delta U) + (\tilde\Lambda + \delta\Lambda) = B, \quad (\tilde\Lambda + \delta\Lambda)_{\tilde{\mathcal{A}}^c} = 0, \quad (\tilde U + \delta U)_{\tilde{\mathcal{A}}} = Z_{\tilde{\mathcal{A}}}.$
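The step just derived translates into a few lines of linear algebra. The following dense-matrix sketch uses a one-dimensional finite difference analogue (the stiffness matrix A, load B, and obstacle Z are our own illustrative choices); the iteration it performs is exactly the primal-dual active set method formalized in Algorithm V.3 below:

```python
import numpy as np

# 1D model: A = tridiagonal stiffness matrix (stand-in for the FE matrix),
# B = lumped load vector for f = -5, Z = nodal values of the obstacle -1/4
L, c = 49, 1.0
h = 1.0 / (L + 1)
A = (np.diag(2.0 * np.ones(L)) - np.diag(np.ones(L - 1), 1)
     - np.diag(np.ones(L - 1), -1)) / h
B = -5.0 * h * np.ones(L)
Z = -0.25 * np.ones(L)

U, Lam = np.zeros(L), np.zeros(L)
for j in range(100):
    active = Lam + c * (U - Z) < 0.0            # the active set A_j
    # assemble and solve the linear system of Algorithm V.3
    M = np.zeros((2 * L, 2 * L))
    M[:L, :L], M[:L, L:] = A, np.eye(L)
    M[L:, :L] = np.diag(active.astype(float))
    M[L:, L:] = np.diag((~active).astype(float))
    rhs = np.concatenate([B, np.where(active, Z, 0.0)])
    U_new, Lam_new = np.split(np.linalg.solve(M, rhs), 2)
    done = np.linalg.norm(U_new - U) < 1e-12
    U, Lam = U_new, Lam_new
    if done:
        break
print(U.min())   # the membrane touches the obstacle at -0.25
```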

The semismooth Newton scheme can thus be formulated in the following form, which is a version of a primal-dual active set method, cf. [HIK02] for details.

Algorithm V.3 (Primal-dual active set method). Let $(U^0, \Lambda^0) \in \mathbb{R}^L \times \mathbb{R}^L$ and $c > 0$ and set $j = 0$.
(1) Compute $(U^{j+1}, \Lambda^{j+1})$ via defining

$\mathcal{A}_j = \{1 \le i \le L : \Lambda^j_i + c(U^j_i - Z_i) < 0\}$

and solving

$\begin{bmatrix} A & I_L \\ I_{\mathcal{A}_j} & I_{\mathcal{A}_j^c} \end{bmatrix} \begin{bmatrix} U^{j+1} \\ \Lambda^{j+1} \end{bmatrix} = \begin{bmatrix} B \\ I_{\mathcal{A}_j} Z \end{bmatrix}.$

(2) Stop the iteration if $\|\nabla(u^{j+1}_h - u^j_h)\|_{L^2(\Omega)} \le \varepsilon_{\mathrm{stop}}$; increase $j \to j+1$ and continue with (1) otherwise.

Remarks V.4. (i) The unknowns related to the active set $\mathcal{A}_j$ can be eliminated from the linear system of equations.
(ii) Global convergence of the algorithm and monotonicity $U^{j+1} \ge U^j \ge Z$ for $j \ge 2$ can be proved if $A$ is an M-matrix.
(iii) Classical active set strategies define $\mathcal{A}_j = \{1 \le i \le L : U^j_i \le Z_i\}$, which corresponds to a formal limit $c \to \infty$.

V.E. Augmented Lagrange method. We introduce the auxiliary variable $y = u$ and extend the functional $J$ by adding a term that enforces this identity, i.e.,

$L_\tau(u, y; \mu) = \tfrac12 \int_\Omega |\nabla u|^2 \, dx - \int_\Omega f u \, dx + I_K(y) + (\mu, u - y) + \tfrac{\tau}{2} \|u - y\|^2_{L^2(\Omega)}.$

The obstacle problem is thus equivalent to determining a saddle point $(u, y; \mu)$ for $L_\tau$, i.e., solving

$\inf_{(u,y)} \sup_\mu L_\tau(u, y; \mu).$

The splitting method of Algorithm IV.1 can be applied to this formulation. In this way, the constraint $u \in K$, included via the indicator functional of $K$, is decoupled from the Dirichlet energy. This is of practical interest since both functionals themselves can be treated efficiently. The splitting approach thus leads to a globally convergent iterative method that is straightforward to realize. In general, it is slower than the semismooth Newton method. If the conditions for global convergence of the Newton method are not satisfied, then the splitting method can be used to determine a good starting value.

V.F. Numerical experiment. Figure 6 shows two finite element approximations of an obstacle problem on $\Omega = (0,1)^2$ with $f(x) = -5$ and $\chi(x) = -1/4$ for all $x \in \Omega$. For a better visualization of the contact set we displayed the negative of the numerical solutions. The solutions were obtained with the semismooth Newton scheme with the stopping criterion defined by $\varepsilon_{\mathrm{stop}}$. The required iteration numbers are shown in Table 1.

For a comparison we also solved the obstacle problem with an augmented Lagrange method, where we always used a fixed step size $\tau$.

Figure 6. Numerical solutions $u_h$ in an obstacle problem on two different triangulations.

Mesh size $h$ | Semismooth Newton | Augmented Lagrange

Table 1. Iteration numbers for the semismooth Newton and the augmented Lagrange method in a two-dimensional obstacle problem.

6. Total-variation minimization

VI.A. Mathematical model. Given a noisy gray-scale image $g : \Omega \to \mathbb{R}$ we aim at removing the noise by solving a convex minimization problem. While high-frequency contributions should be eliminated, edges and other important features should be preserved. Classical Sobolev spaces are therefore too restrictive to provide meaningful image regularizations. The space of functions of bounded variation is an extension of the space $W^{1,1}(\Omega)$ and contains piecewise weakly differentiable functions which may be discontinuous across certain lower-dimensional interfaces. A seminorm in $BV(\Omega)$ is given by the total variation of the distributional gradient of a function $u \in L^1(\Omega)$, i.e.,

$|Du|(\Omega) = \sup \Big\{ \int_\Omega u \, \mathrm{div}\, p \, dx : p \in C^\infty_c(\Omega; \mathbb{R}^d), \ \|p\|_{L^\infty(\Omega)} \le 1 \Big\}.$

If $u \in L^2(\Omega)$ then the vector fields $p$ can be taken from the larger space $H_N(\mathrm{div}; \Omega)$ of square integrable vector fields whose distributional divergence belongs to $L^2(\Omega)$ and which have a vanishing normal component on $\partial\Omega$. If $u = \chi_A$ is the characteristic function of a sufficiently regular set $A \subset \Omega$ then $|Du|(\Omega)$ is the perimeter of $A$, which coincides with the length or surface measure of $\partial A$. The Rudin–Osher–Fatemi model [ROF92, CL97] for image denoising seeks a solution $u \in BV(\Omega) \cap L^2(\Omega)$ for the minimization problem:

Minimize $I(u) = |Du|(\Omega) + \tfrac{\alpha}{2} \|u - g\|^2_{L^2(\Omega)}.$

Here, $\alpha > 0$ is a given parameter. The problem is illustrated in Fig. 7.

Figure 7. Illustration of the image denoising problem.

Existence of minimizers follows from appropriate weak compactness properties of the space $BV(\Omega)$, cf. [ABM06]. Since the functional $I$ is strongly convex, the distance of any function to the minimizer is controlled by the difference of the values of the functional.

Lemma VI.1. If $u \in BV(\Omega) \cap L^2(\Omega)$ minimizes $I$ then we have

$\tfrac{\alpha}{2} \|u - v\|^2_{L^2(\Omega)} \le I(v) - I(u)$

for every $v \in BV(\Omega) \cap L^2(\Omega)$. In particular, minimizers are uniquely defined.

Proof. We define $F : BV(\Omega) \to \mathbb{R}$ and $G : L^2(\Omega) \to \mathbb{R}$ by

$F(u) = |Du|(\Omega), \quad G(u) = \tfrac{\alpha}{2} \|u - g\|^2_{L^2(\Omega)}$

and extend $F$ by $+\infty$ to $L^2(\Omega)$. Then $F$ is convex, and $G$ is strongly convex and Fréchet differentiable with $DG(u)[w] = \alpha (u - g, w)$, i.e., we have

$DG(u)[v - u] + \tfrac{\alpha}{2} \|u - v\|^2_{L^2(\Omega)} + G(u) = G(v)$

for all $u, v \in L^2(\Omega)$. Since $u \in BV(\Omega) \cap L^2(\Omega)$ is a minimizer, we have $0 \in \partial I(u) = \partial F(u) + DG(u)$, or equivalently $-DG(u) \in \partial F(u)$, i.e.,

$-DG(u)[v - u] + F(u) \le F(v).$

The strong convexity of $G$ yields that

$\tfrac{\alpha}{2} \|u - v\|^2_{L^2(\Omega)} + G(u) - G(v) = -DG(u)[v - u] \le F(v) - F(u),$

which proves the assertion.

VI.B. Discretization. To approximate minimizers of $I$ numerically, we note that continuous, piecewise affine functions belong to $BV(\Omega)$ and restrict the minimization to $S^1(T_h) \subset W^{1,1}(\Omega)$, i.e., we aim at solving:

Minimize $I(u_h) = |Du_h|(\Omega) + \tfrac{\alpha}{2} \|u_h - g\|^2_{L^2(\Omega)}$

in the set of functions $u_h \in S^1(T_h)$. The existence of a unique solution is a consequence of the continuity and strong convexity of $I$. Lemma VI.1 implies an error estimate in the $L^2(\Omega)$ norm, i.e., we have, cf. [Bar12, BNS14],

$\tfrac{\alpha}{2} \|u - u_h\|^2_{L^2(\Omega)} \le c h^{1/2}.$

This estimate does not assume additional regularity properties of the exact solution $u \in BV(\Omega) \cap L^2(\Omega)$ but uses the fact that we have $u \in L^\infty(\Omega)$.

Remarks VI.2. (i) Since for $u \in BV(\Omega) \cap L^2(\Omega)$ the best approximation in $S^1(T_h)$ satisfies $\inf_{v_h \in S^1(T_h)} \|u - v_h\|_{L^2(\Omega)} \le c h^{1/2}$, the convergence rate of the theorem is suboptimal. Numerical experiments indicate that the optimal convergence rate $O(h^{1/2})$ in $L^2(\Omega)$ is in general not attained.
(ii) The discretization with piecewise constant functions on a sequence of triangulations does in general not lead to a convergent numerical method.

VI.C. Primal-dual method. To develop an iterative solution method for the nondifferentiable minimization problem defined by $I$, we derive a related saddle-point formulation. For this we define the set of elementwise constant vector fields by

$L^0(T_h)^d = \{q_h \in L^1(\Omega; \mathbb{R}^d) : q_h|_T \text{ constant for every } T \in T_h\}$

and note that, due to the fact that $\nabla u_h \in L^0(T_h)^d$ for $u_h \in S^1(T_h)$, we have

$\inf_{u_h \in S^1(T_h)} \int_\Omega |\nabla u_h| \, dx + \tfrac{\alpha}{2} \|u_h - g\|^2_{L^2(\Omega)}$
$\quad = \inf_{u_h \in S^1(T_h)} \; \sup_{p_h \in L^0(T_h)^d} \int_\Omega p_h \cdot \nabla u_h \, dx + \tfrac{\alpha}{2} \|u_h - g\|^2_{L^2(\Omega)} - I_{K_1(0)}(p_h)$
$\quad = \inf_{u_h \in S^1(T_h)} \; \sup_{p_h \in L^0(T_h)^d} L_h(u_h, p_h).$

Here, $I_{K_1(0)}$ is the indicator functional of the set

$K_1(0) = \{p \in L^1(\Omega; \mathbb{R}^d) : |p| \le 1 \text{ a.e. in } \Omega\}.$

A saddle point $(u_h, p_h)$ satisfies $|p_h| \le 1$ in $\Omega$ and the equations

$(p_h, \nabla v_h) = -\alpha (u_h - g, v_h), \quad (\nabla u_h, q_h - p_h) \le 0$

for all $(v_h, q_h) \in S^1(T_h) \times L^0(T_h)^d$ with $|q_h| \le 1$ in $\Omega$. With an appropriate step size $\tau > 0$ and a discrete inner product $(\cdot, \cdot)_{h,s}$ on $S^1(T_h)$ that may differ from the $L^2$ inner product, we obtain the following variant of the primal-dual iteration.

Algorithm VI.3 (Primal-dual iteration). Let $(\cdot, \cdot)_{h,s}$ be an inner product on $S^1(T_h)$, $\tau > 0$, $(u^0_h, p^0_h) \in S^1(T_h) \times L^0(T_h)^d$; set $d_t u^0_h = 0$ and $j = 1$.
(1) Set $\tilde u^j_h = u^{j-1}_h + \tau d_t u^{j-1}_h$.
(2) Compute the maximizer $p^j_h \in L^0(T_h)^d$ of the mapping

$p_h \mapsto -\tfrac{1}{2\tau} \|p_h - p^{j-1}_h\|^2_{L^2(\Omega)} + \int_\Omega p_h \cdot \nabla \tilde u^j_h \, dx - I_{K_1(0)}(p_h).$

(3) Compute the minimizer $u^j_h \in S^1(T_h)$ of the mapping

$u_h \mapsto \tfrac{1}{2\tau} \|u_h - u^{j-1}_h\|^2_{h,s} + \int_\Omega p^j_h \cdot \nabla u_h \, dx + \tfrac{\alpha}{2} \|u_h - g\|^2_{L^2(\Omega)}.$

(4) Stop if $\|u^{j-1}_h - u^j_h\|_{h,s} \le \varepsilon_{\mathrm{stop}}$; set $d_t u^j_h = (u^j_h - u^{j-1}_h)/\tau$, increase $j \to j+1$, and continue with (1) otherwise.

Remark VI.4. Note that $p^j_h$ in Step (2) is given by the truncation operation

$p^j_h = \big( p^{j-1}_h + \tau \nabla \tilde u^j_h \big) / \max\big\{1, |p^{j-1}_h + \tau \nabla \tilde u^j_h| \big\},$

which can be computed elementwise.

The iterates of the primal-dual algorithm converge to a stationary point if $\tau$ is sufficiently small, e.g., $\tau = O(h)$ if $(\cdot, \cdot)_{h,s}$ is the inner product in $L^2(\Omega)$, cf. [CP11]. Other useful choices of the inner product $(\cdot, \cdot)_{h,s}$ are weighted combinations of the inner product in $L^2(\Omega)$ and the semi-inner product in $H^1(\Omega)$, e.g.,

$(v_h, w_h)_{h,s} = (v_h, w_h) + h^{(1-s)/s} (\nabla v_h, \nabla w_h);$

for $s = 1/2$ this is a discrete version of the inner product in the broken Sobolev space $H^{1/2}(\Omega)$. It provides a good choice since this norm remains bounded for bounded sequences in $W^{1,1}(\Omega) \cap L^\infty(\Omega)$, cf. [Bar15b, Bar15a] for details.

VI.D. Splitting method. Splitting methods are attractive for minimization problems involving total variation since these are directly obtained from introducing the variable $r_h = \nabla u_h$ and considering the augmented Lagrange functional

$L_{\tau,h}(u_h, r_h; \mu_h) = \int_\Omega |r_h| \, dx + \tfrac{\alpha}{2} \|u_h - g\|^2_{L^2(\Omega)} + (\mu_h, \nabla u_h - r_h)_h + \tfrac{\tau}{2} \|\nabla u_h - r_h\|^2_h.$

Note that the minimization with respect to $r_h$ can be done explicitly elementwise if $\|\cdot\|_h$ is a multiple of the $L^2$ norm. Often, the norm and inner product in $L^2(\Omega)$ are chosen to define $L_{\tau,h}$. This can in general not be expected to lead to a uniformly stable iterative scheme since typically we have $\nabla u \notin L^2(\Omega; \mathbb{R}^d)$ and $\|\nabla u_h\|_{L^2(\Omega)} \sim h^{-d/2}$ as $h \to 0$. Instead one may use a weighted inner product and corresponding norm, e.g., defined by

$(r_h, s_h)_h = h^d \int_\Omega r_h \cdot s_h \, dx.$

The weighting factor $h^d$ compensates the growth of the $L^2$ norm of $\nabla u_h$ as $h \to 0$. This has the interpretation of a softened penalty method.

VI.E. Numerical experiment. We set $\Omega = (0,1)^2$ and let $g = \chi_{B_r(m)}$ be the characteristic function of the ball of radius $r = 1/5$ around the midpoint $m = (1/2, 1/2)$. The regularization parameter is chosen as $\alpha = 100$. Figure 8 displays numerical solutions on two different triangulations. These were obtained with the primal-dual method for $s = 1/2$, step size $\tau = h^{1/2}/10$, and the stopping criterion defined by $\varepsilon_{\mathrm{stop}}$. Table 2 shows the corresponding iteration numbers on different triangulations together with those needed in the splitting method. We used the weighted $L^2$ norm to define the penalty term and always set $\tau = 10$.
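A compact way to experiment with Algorithm VI.3 is to replace the finite element spaces by finite differences on a uniform grid and to use the $L^2$ inner product, so that $\tau = O(h)$ is required. The following sketch is our own simplified analogue of the iteration (not the finite element code behind the experiment above): the $p$-update is the truncation of Remark VI.4 and the $u$-update can be solved exactly pointwise; the data mimic the experiment of Section VI.E:

```python
import numpy as np

def grad(u, h):
    # forward differences; zero across the outer boundary
    g = np.zeros(u.shape + (2,))
    g[:-1, :, 0] = (u[1:, :] - u[:-1, :]) / h
    g[:, :-1, 1] = (u[:, 1:] - u[:, :-1]) / h
    return g

def div(p, h):
    # negative transpose of grad, so that <grad u, p> = -<u, div p>
    d = np.zeros(p.shape[:2])
    d[0, :] += p[0, :, 0] / h
    d[1:, :] += (p[1:, :, 0] - p[:-1, :, 0]) / h
    d[:, 0] += p[:, 0, 1] / h
    d[:, 1:] += (p[:, 1:, 1] - p[:, :-1, 1]) / h
    return d

# data: characteristic function of a ball, as in the experiment above
N, alpha = 64, 100.0
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
g = ((X - 0.5)**2 + (Y - 0.5)**2 <= 0.2**2).astype(float)

tau = h / 4.0                        # tau = O(h) so that tau*|grad| < 1
u, u_old = g.copy(), g.copy()
p = np.zeros(g.shape + (2,))
for j in range(2000):
    u_tilde = 2.0 * u - u_old                  # step (1): extrapolation
    q = p + tau * grad(u_tilde, h)             # step (2): ascent in p ...
    nrm = np.sqrt(q[..., 0]**2 + q[..., 1]**2)
    p = q / np.maximum(1.0, nrm)[..., None]    # ... and truncation (Remark VI.4)
    u_old = u
    # step (3): (u - u_old)/tau - div p + alpha*(u - g) = 0, solved pointwise
    u = (u / tau + div(p, h) + alpha * g) / (1.0 / tau + alpha)
print(u.min(), u.max())   # the jump of g is reduced by the regularization
```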

Figure 8. Numerical solutions $u_h$ for a total-variation regularized problem on two different triangulations.

Mesh size $h$ | Primal-dual | Augmented Lagrange

Table 2. Iteration numbers for the primal-dual and the augmented Lagrange method in a two-dimensional total-variation regularized problem.

References

[ABM06] Hedy Attouch, Giuseppe Buttazzo, and Gérard Michaille, Variational analysis in Sobolev and BV spaces, MPS/SIAM Series on Optimization, vol. 6, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2006.
[Bar12] Sören Bartels, Total variation minimization with finite elements: convergence and iterative solution, SIAM J. Numer. Anal. 50 (2012), no. 3.
[Bar15a] Sören Bartels, Broken Sobolev space iteration for total variation regularized minimization problems, IMA Journal of Numerical Analysis (2015).
[Bar15b] Sören Bartels, Numerical methods for nonlinear partial differential equations, Springer Series in Computational Mathematics, vol. 47, Springer, Cham, 2015.
[BNS14] Sören Bartels, Ricardo H. Nochetto, and Abner J. Salgado, Discrete total variation flows without regularization, SIAM J. Numer. Anal. 52 (2014), no. 1.
[Cia78] Philippe G. Ciarlet, The finite element method for elliptic problems, Studies in Mathematics and its Applications, vol. 4, North-Holland Publishing Co., Amsterdam-New York-Oxford, 1978.
[CL97] Antonin Chambolle and Pierre-Louis Lions, Image recovery via total variation minimization and related problems, Numer. Math. 76 (1997), no. 2.
[CP11] Antonin Chambolle and Thomas Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision 40 (2011), no. 1.
[ET76] Ivar Ekeland and Roger Temam, Convex analysis and variational problems, North-Holland Publishing Co., Amsterdam-Oxford; American Elsevier Publishing Co., Inc., New York, 1976.
[FG83] Michel Fortin and Roland Glowinski, Augmented Lagrangian methods, Studies in Mathematics and its Applications, vol. 15, North-Holland Publishing Co., Amsterdam, 1983.

[GLT81] Roland Glowinski, Jacques-Louis Lions, and Raymond Trémolières, Numerical analysis of variational inequalities, Studies in Mathematics and its Applications, vol. 8, North-Holland Publishing Co., Amsterdam-New York, 1981.
[HIK02] Michael Hintermüller, Kazufumi Ito, and Karl Kunisch, The primal-dual active set strategy as a semismooth Newton method, SIAM J. Optim. 13 (2002), no. 3 (2003).
[KS80] David Kinderlehrer and Guido Stampacchia, An introduction to variational inequalities and their applications, Pure and Applied Mathematics, vol. 88, Academic Press, Inc., New York-London, 1980.
[Rod87] José-Francisco Rodrigues, Obstacle problems in mathematical physics, North-Holland Mathematics Studies, vol. 134, Notas de Matemática 114, North-Holland Publishing Co., Amsterdam, 1987.
[ROF92] Leonid I. Rudin, Stanley Osher, and Emad Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena 60 (1992), no. 1-4.
[ZRM09] Royce K. P. Zia, Edward F. Redish, and Susan R. McKay, Making sense of the Legendre transform, American Journal of Physics 77 (2009), no. 7.


More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

On preconditioned Uzawa-type iterations for a saddle point problem with inequality constraints

On preconditioned Uzawa-type iterations for a saddle point problem with inequality constraints On preconditioned Uzawa-type iterations for a saddle point problem with inequality constraints Carsten Gräser and Ralf Kornhuber FU Berlin, FB Mathematik und Informatik (http://www.math.fu-berlin.de/rd/we-02/numerik/)

More information

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers.

2. Dual space is essential for the concept of gradient which, in turn, leads to the variational analysis of Lagrange multipliers. Chapter 3 Duality in Banach Space Modern optimization theory largely centers around the interplay of a normed vector space and its corresponding dual. The notion of duality is important for the following

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

The speed of Shor s R-algorithm

The speed of Shor s R-algorithm IMA Journal of Numerical Analysis 2008) 28, 711 720 doi:10.1093/imanum/drn008 Advance Access publication on September 12, 2008 The speed of Shor s R-algorithm J. V. BURKE Department of Mathematics, University

More information

On a weighted total variation minimization problem

On a weighted total variation minimization problem On a weighted total variation minimization problem Guillaume Carlier CEREMADE Université Paris Dauphine carlier@ceremade.dauphine.fr Myriam Comte Laboratoire Jacques-Louis Lions, Université Pierre et Marie

More information

A First Order Primal-Dual Algorithm for Nonconvex T V q Regularization

A First Order Primal-Dual Algorithm for Nonconvex T V q Regularization A First Order Primal-Dual Algorithm for Nonconvex T V q Regularization Thomas Möllenhoff, Evgeny Strekalovskiy, and Daniel Cremers TU Munich, Germany Abstract. We propose an efficient first order primal-dual

More information

Basic Principles of Weak Galerkin Finite Element Methods for PDEs

Basic Principles of Weak Galerkin Finite Element Methods for PDEs Basic Principles of Weak Galerkin Finite Element Methods for PDEs Junping Wang Computational Mathematics Division of Mathematical Sciences National Science Foundation Arlington, VA 22230 Polytopal Element

More information

On duality theory of conic linear problems

On duality theory of conic linear problems On duality theory of conic linear problems Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 3332-25, USA e-mail: ashapiro@isye.gatech.edu

More information

On the density of classes of closed convex sets with pointwise constraints in Sobolev spaces

On the density of classes of closed convex sets with pointwise constraints in Sobolev spaces On the density of classes of closed convex sets with pointwise constraints in Sobolev spaces M. Hintermüller a, C.N. Rautenberg a a Institute for Mathematics, Humboldt-Universität zu Berlin, Unter den

More information

Dirichlet s principle and well posedness of steady state solutions in peridynamics

Dirichlet s principle and well posedness of steady state solutions in peridynamics Dirichlet s principle and well posedness of steady state solutions in peridynamics Petronela Radu Work supported by NSF - DMS award 0908435 January 19, 2011 The steady state peridynamic model Consider

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

Convex Optimization and Modeling

Convex Optimization and Modeling Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual

More information

Obstacle problems and isotonicity

Obstacle problems and isotonicity Obstacle problems and isotonicity Thomas I. Seidman Revised version for NA-TMA: NA-D-06-00007R1+ [June 6, 2006] Abstract For variational inequalities of an abstract obstacle type, a comparison principle

More information

Parameter Identification

Parameter Identification Lecture Notes Parameter Identification Winter School Inverse Problems 25 Martin Burger 1 Contents 1 Introduction 3 2 Examples of Parameter Identification Problems 5 2.1 Differentiation of Data...............................

More information

PARTITION OF UNITY FOR THE STOKES PROBLEM ON NONMATCHING GRIDS

PARTITION OF UNITY FOR THE STOKES PROBLEM ON NONMATCHING GRIDS PARTITION OF UNITY FOR THE STOES PROBLEM ON NONMATCHING GRIDS CONSTANTIN BACUTA AND JINCHAO XU Abstract. We consider the Stokes Problem on a plane polygonal domain Ω R 2. We propose a finite element method

More information

Weak sharp minima on Riemannian manifolds 1

Weak sharp minima on Riemannian manifolds 1 1 Chong Li Department of Mathematics Zhejiang University Hangzhou, 310027, P R China cli@zju.edu.cn April. 2010 Outline 1 2 Extensions of some results for optimization problems on Banach spaces 3 4 Some

More information

The Kuratowski Ryll-Nardzewski Theorem and semismooth Newton methods for Hamilton Jacobi Bellman equations

The Kuratowski Ryll-Nardzewski Theorem and semismooth Newton methods for Hamilton Jacobi Bellman equations The Kuratowski Ryll-Nardzewski Theorem and semismooth Newton methods for Hamilton Jacobi Bellman equations Iain Smears INRIA Paris Linz, November 2016 joint work with Endre Süli, University of Oxford Overview

More information

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization /

Uses of duality. Geoff Gordon & Ryan Tibshirani Optimization / Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear

More information

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems

On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems On the convergence rate of a forward-backward type primal-dual splitting algorithm for convex optimization problems Radu Ioan Boţ Ernö Robert Csetnek August 5, 014 Abstract. In this paper we analyze the

More information

THE STOKES SYSTEM R.E. SHOWALTER

THE STOKES SYSTEM R.E. SHOWALTER THE STOKES SYSTEM R.E. SHOWALTER Contents 1. Stokes System 1 Stokes System 2 2. The Weak Solution of the Stokes System 3 3. The Strong Solution 4 4. The Normal Trace 6 5. The Mixed Problem 7 6. The Navier-Stokes

More information

Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method

Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method Article Investigating the Influence of Box-Constraints on the Solution of a Total Variation Model via an Efficient Primal-Dual Method Andreas Langer Department of Mathematics, University of Stuttgart,

More information

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Michel Théra LACO, UMR-CNRS 6090, Université de Limoges michel.thera@unilim.fr reporting joint work with E. Ernst and M. Volle

More information

Maximal Monotone Inclusions and Fitzpatrick Functions

Maximal Monotone Inclusions and Fitzpatrick Functions JOTA manuscript No. (will be inserted by the editor) Maximal Monotone Inclusions and Fitzpatrick Functions J. M. Borwein J. Dutta Communicated by Michel Thera. Abstract In this paper, we study maximal

More information

Universität des Saarlandes. Fachrichtung 6.1 Mathematik

Universität des Saarlandes. Fachrichtung 6.1 Mathematik Universität des Saarlandes U N I V E R S I T A S S A R A V I E N I S S Fachrichtung 6.1 Mathematik Preprint Nr. 207 The elastic-plastic torsion problem: a posteriori error estimates for approximate solutions

More information

Merit Functions and Descent Algorithms for a Class of Variational Inequality Problems

Merit Functions and Descent Algorithms for a Class of Variational Inequality Problems Merit Functions and Descent Algorithms for a Class of Variational Inequality Problems Michael Patriksson June 15, 2011 Abstract. We consider a variational inequality problem, where the cost mapping is

More information

EXISTENCE RESULTS FOR SOME VARIATIONAL INEQUALITIES INVOLVING NON-NEGATIVE, NON-COERCITIVE BILINEAR FORMS. Georgi Chobanov

EXISTENCE RESULTS FOR SOME VARIATIONAL INEQUALITIES INVOLVING NON-NEGATIVE, NON-COERCITIVE BILINEAR FORMS. Georgi Chobanov Pliska Stud. Math. Bulgar. 23 (2014), 49 56 STUDIA MATHEMATICA BULGARICA EXISTENCE RESULTS FOR SOME VARIATIONAL INEQUALITIES INVOLVING NON-NEGATIVE, NON-COERCITIVE BILINEAR FORMS Georgi Chobanov Abstract.

More information

ENERGY METHODS IN IMAGE PROCESSING WITH EDGE ENHANCEMENT

ENERGY METHODS IN IMAGE PROCESSING WITH EDGE ENHANCEMENT ENERGY METHODS IN IMAGE PROCESSING WITH EDGE ENHANCEMENT PRASHANT ATHAVALE Abstract. Digital images are can be realized as L 2 (R 2 objects. Noise is introduced in a digital image due to various reasons.

More information

Extended Monotropic Programming and Duality 1

Extended Monotropic Programming and Duality 1 March 2006 (Revised February 2010) Report LIDS - 2692 Extended Monotropic Programming and Duality 1 by Dimitri P. Bertsekas 2 Abstract We consider the problem minimize f i (x i ) subject to x S, where

More information

INF-SUP CONDITION FOR OPERATOR EQUATIONS

INF-SUP CONDITION FOR OPERATOR EQUATIONS INF-SUP CONDITION FOR OPERATOR EQUATIONS LONG CHEN We study the well-posedness of the operator equation (1) T u = f. where T is a linear and bounded operator between two linear vector spaces. We give equivalent

More information