Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs


1 Adaptive discretization and first-order methods for nonsmooth inverse problems for PDEs
Christian Clason, Faculty of Mathematics, Universität Duisburg-Essen
joint work with Barbara Kaltenbacher, Tuomo Valkonen, Daniel Wachsmuth
Turing/LMS Workshop "Inverse Problems and Data Science", Edinburgh, May 9

2 Motivation
Tikhonov regularization:
$$\min_{u \in U} F(S(u) - y^\delta) + G_\alpha(u)$$
- $F: Y \to \mathbb{R}$ discrepancy term, $y^\delta$ noisy data
- $G_\alpha: X \to \mathbb{R}$ regularization term, $\alpha > 0$
- $S: U \subset X \to Y$ forward operator, involves solution of a PDE
Adaptive regularization and discretization:
- regularization: $u^\delta_{\alpha(\delta)} \to u^\dagger$ with $S(u^\dagger) = y$ for $\delta \to 0$
- discretization: FE approximation $u^\delta_{\alpha,h} \to u^\delta_\alpha$ for $h \to 0$
- goal: combined choice $(\alpha, h)(\delta)$ for nonsmooth functionals

3 Motivation
Primal-dual extragradient method:
- first-order algorithm for nonsmooth convex problems with linear operators [Chambolle/Pock 2011]
- very popular in imaging (TV denoising, deblurring, ...)
- acceleration (Nesterov, $O(1/k^2)$ convergence)
- version for nonlinear operators [Valkonen 2014]
Here:
- application to parameter identification for PDEs
- function space algorithm
Difficulty: convergence proof requires set-valued analysis in infinite-dimensional spaces

4 Model problems
$L^1$-fitting:
$$\min_u\ \|S(u) - y^\delta\|_{L^1} + \tfrac{\alpha}{2}\|u\|_{L^2}^2$$
$L^\infty$-fitting:
$$\min_u\ \tfrac12\|u\|_{L^2}^2 \quad\text{s.t.}\quad |S(u)(x) - y^\delta(x)| \le \delta \ \text{a.e. in } \Omega$$
$S: U \subset L^2(\Omega) \to L^2(\Omega)$, $S(u) =: y$ satisfies
$$-\Delta y + uy = f, \qquad \partial_\nu y = 0$$
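For concreteness, a minimal sketch of a forward operator of this type on $\Omega = (-1, 1)$ (matching the later numerical examples), discretized by finite differences rather than the finite elements used in the talk; the grid size, the source $f$, and all function names are illustrative assumptions.

```python
import numpy as np

def forward(u, f, a=-1.0, b=1.0):
    """Solve -y'' + u*y = f on (a, b) with homogeneous Neumann BC
    by central finite differences (illustrative stand-in for S(u))."""
    n = u.size
    h = (b - a) / (n - 1)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2 + u[i]
        A[i, i + 1] = -1.0 / h**2
    # Neumann boundary conditions y'(a) = y'(b) = 0 via one-sided differences
    A[0, 0], A[0, 1] = 1.0 / h, -1.0 / h
    A[-1, -1], A[-1, -2] = 1.0 / h, -1.0 / h
    rhs = f.copy()
    rhs[0], rhs[-1] = 0.0, 0.0
    return np.linalg.solve(A, rhs)
```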

5 Outline
1. Overview
2. Adaptive discretization
3. Algorithm
4. Set-valued analysis: metric regularity; stability of saddle points
5. Application to parameter identification: $L^1$ fitting; $L^\infty$ fitting
6. Numerical examples

6 Adaptive discretization
Tikhonov functional for $\alpha > 0$:
$$u^\delta_\alpha = \arg\min_{u \in U} J_\alpha(u) := F(S(u) - y^\delta) + G_\alpha(u)$$
Discretized Tikhonov functional for $\alpha > 0$, $h > 0$:
$$u^\delta_{\alpha,h} = \arg\min_{u \in U} J_{\alpha,h}(u) := F(S_h(u) - y^\delta) + G_\alpha(u)$$
- $S_h$ (conforming) finite element approximation of $S$
- $S_h(u) \to S(u)$ for $h \to 0$ for any $u \in U$

7 Adaptive discretization
Choose for given $\delta > 0$:
1. $\alpha = \alpha(\delta)$ such that $\delta \le F(S(u^\delta_\alpha) - y^\delta) \le \tau\delta$
2. $h = h(\alpha, \delta)$ such that
$$|F(S(u^\delta_{\alpha,h}) - y^\delta) - F(S(u^\delta_\alpha) - y^\delta)| \le c_1\delta, \qquad |J_{\alpha,h}(u^\delta_{\alpha,h}) - J_\alpha(u^\delta_\alpha)| \le c_2\delta$$
Then, for all $\delta > 0$:
$$G_\alpha(u^\delta_{\alpha,h}) \le G_\alpha(u^\dagger), \qquad F(S_h(u^\delta_{\alpha,h}) - y^\delta) \le \tau\delta$$
- no direct control of $\|u^\delta_{\alpha,h} - u^\delta_\alpha\|_X$ needed! (see the sketch below)
- convergence, rates for $\|u^\delta_{\alpha,h} - u^\dagger\|_X \to 0$ for $\delta \to 0$ (standard)
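One possible realization of this combined choice rule, as a rough Python sketch; all helpers (`initial_mesh`, `solve_discrete_tikhonov`, `error_estimator`, `refine`, `discrepancy`) are hypothetical stand-ins for the discrete solver and the functional estimators discussed next, and the parameter values are arbitrary.

```python
def choose_alpha_h(y_delta, delta, tau=2.0, c1=1.0, c2=1.0, q=0.5):
    """Combined (alpha, h) choice: discrepancy principle in alpha,
    estimator-controlled mesh refinement in h (illustrative sketch;
    all called helpers are hypothetical)."""
    alpha, mesh = 1.0, initial_mesh()
    while True:
        u_h = solve_discrete_tikhonov(alpha, mesh, y_delta)
        # refine until the functional estimators certify O(delta) accuracy
        eta_F, eta_J = error_estimator(u_h, alpha, mesh, y_delta)
        while eta_F > c1 * delta or eta_J > c2 * delta:
            mesh = refine(mesh, u_h)
            u_h = solve_discrete_tikhonov(alpha, mesh, y_delta)
            eta_F, eta_J = error_estimator(u_h, alpha, mesh, y_delta)
        if discrepancy(u_h, y_delta) <= tau * delta:
            return alpha, mesh, u_h
        alpha *= q  # decrease alpha until the discrepancy principle holds
```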

8 Functional error estimator
Goal: a posteriori error estimator for discrepancy, functional accuracy
Here: linear operator $S = A^{-1}$ for an elliptic differential operator $A$
Functional error estimator: If there exist $\varphi_\alpha, \psi$ with
1. $\lambda(1-\lambda)\,\psi(y_1, y_2) \le \lambda F(y_1) + (1-\lambda)F(y_2) - F(\lambda y_1 + (1-\lambda)y_2)$
2. $\lambda(1-\lambda)\,\varphi_\alpha(u_1, u_2) \le \lambda G_\alpha(u_1) + (1-\lambda)G_\alpha(u_2) - G_\alpha(\lambda u_1 + (1-\lambda)u_2)$
then, for all $u \in U$, $y^* \in Y$:
$$\psi(Su^\delta_\alpha, Su) + \varphi_\alpha(u^\delta_\alpha, u) \le F(Su) + F^*(-y^*) + G_\alpha(u) + G_\alpha^*(S^*y^*)$$
- $F^*$, $G_\alpha^*$ Fenchel conjugates, $S^*$ adjoint
- proof by strong convexity, weak duality
- apply for $u = u^\delta_{\alpha,h}$, $y^* = y^\delta - Su^\delta_{\alpha,h}$
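The "strong convexity plus weak duality" step can be spelled out in a few lines; a sketch of the argument as I read it from the slide, assuming the standard Fenchel dual of $\min_u F(Su) + G_\alpha(u)$, with $\bar u := u^\delta_\alpha$:

```latex
\begin{aligned}
&\lambda(1-\lambda)\bigl[\psi(S\bar u, Su) + \varphi_\alpha(\bar u, u)\bigr]
  \le \lambda J_\alpha(\bar u) + (1-\lambda)J_\alpha(u)
      - J_\alpha(\lambda\bar u + (1-\lambda)u)\\
&\qquad \le (1-\lambda)\bigl[J_\alpha(u) - J_\alpha(\bar u)\bigr]
  \quad\text{since $\bar u$ minimizes $J_\alpha$;}\\
&\text{divide by $1-\lambda$ and let $\lambda \to 1$:}\quad
  \psi(S\bar u, Su) + \varphi_\alpha(\bar u, u) \le J_\alpha(u) - J_\alpha(\bar u);\\
&\text{weak duality } J_\alpha(\bar u) \ge -F^*(-y^*) - G_\alpha^*(S^*y^*)
  \text{ for all } y^* \ \Rightarrow\ \text{the claimed estimate.}
\end{aligned}
```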

9 Application to quadratic penalty
$X$, $Y$ Hilbert spaces:
$$F(y) = \tfrac12\|y - y^\delta\|_Y^2, \qquad G_\alpha(u) = \tfrac{\alpha}{2}\|u\|_X^2$$
$$\psi(y_1, y_2) = \tfrac12\|y_1 - y_2\|_Y^2, \qquad \varphi_\alpha(u_1, u_2) = \tfrac{\alpha}{2}\|u_1 - u_2\|_X^2$$
$$F^*(y^*) = \tfrac12\|y^*\|_Y^2 + (y^*, y^\delta), \qquad (G_\alpha)^*(u^*) = \tfrac{1}{2\alpha}\|u^*\|_X^2$$
Functional error estimate for $\bar u = u^\delta_\alpha$, $\bar u_h = u^\delta_{\alpha,h}$:
$$\alpha\|\bar u - \bar u_h\|_X^2 + \|S\bar u - S\bar u_h\|_Y^2 \le \tfrac{1}{\alpha}\|\alpha\bar u_h + S^*(S\bar u_h - y^\delta)\|_X^2$$
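In a matrix setting this estimator is directly computable from the discrete solution; a small sketch, assuming $S$ is given as a dense matrix (names and setup illustrative):

```python
import numpy as np

def quadratic_penalty_estimator(S, u_h, y_delta, alpha):
    """Right-hand side of the functional error estimate
    (1/alpha) * ||alpha*u_h + S^T (S u_h - y_delta)||^2
    for quadratic F and G_alpha."""
    residual = alpha * u_h + S.T @ (S @ u_h - y_delta)
    return np.dot(residual, residual) / alpha
```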

10 Application to quadratic penalty
Residual estimate for the right-hand side, for $\bar y_h = S_h\bar u_h$, $\bar y = S\bar u$:
$$\alpha\|\bar u - \bar u_h\|_X^2 + \|\bar y - \bar y_h\|_Y^2 \le \tfrac{2}{\alpha}\|S^*\rho_w + \rho_u\|_X^2 + 4\|S\rho_y\|_Y^2$$
$$J_\alpha(\bar u_h) - J_\alpha(\bar u) \le \tfrac{1}{2\alpha}\|S^*\rho_w + \rho_u\|_X^2 + (S\bar u_h - y^\delta, S\rho_y)_Y$$
with the residuals of the optimality conditions
$$\rho_w := A^*w_h + \bar y_h - y^\delta, \qquad \rho_u := \alpha\bar u_h - w_h, \qquad \rho_y := A\bar y_h - \bar u_h,$$
where $w_h = -S_h^*(\bar y_h - y^\delta)$ is the discrete adjoint state
- standard element-based a posteriori estimators for the right-hand sides

11 Application to norm penalty
$X$ Banach space (e.g., $X = \mathcal{M}(\Omega)$: sparsity)
$$G_\alpha(u) = \alpha\|u\|_X, \qquad \varphi_\alpha(u_1, u_2) = 0$$
$$(G_\alpha)^*(u^*) = \iota_{\alpha B_{X^*}}(u^*) := \begin{cases} 0 & \|u^*\|_{X^*} \le \alpha \\ \infty & \text{else} \end{cases}$$
Functional estimate with $\tilde y = \kappa\bar y_h + (1-\kappa)y^\delta$:
$$\|S\bar u - S\bar u_h\|_Y^2 \le 2\alpha\|\bar u_h\|_X + 2\kappa\langle \bar u_h, S^*(\bar y_h - y^\delta)\rangle + \|S\bar u_h - \tilde y\|_Y^2$$
for $\kappa$ such that $\kappa\|S^*(\bar y_h - y^\delta)\|_{X^*} \le \alpha$ (can be estimated)

12 Application to norm penalty
Residual estimate for the right-hand side, for $w_h = S_h^*(y^\delta - \bar y_h)$:
$$\|\bar y_h - \bar y\|_Y^2 \le 4\bigl(\alpha\|\bar u_h\|_X - \langle\bar u_h, w_h\rangle\bigr) + 4\kappa\langle\bar u_h, w_h - S^*(y^\delta - \bar y_h)\rangle + 4(1-\kappa)\langle\bar u_h, w_h\rangle + 4\|S\rho_y + (\kappa-1)(\bar y_h - y^\delta)\|_Y^2$$
$$J_\alpha(\bar u_h) - J_\alpha(\bar u) \le \alpha\|\bar u_h\|_X - \langle\bar u_h, w_h\rangle + \kappa\langle\bar u_h, w_h - S^*(y^\delta - \bar y_h)\rangle + (1-\kappa)\langle\bar u_h, w_h\rangle + \|S\rho_y + (\kappa-1)(\bar y_h - y^\delta)\|_Y\,\|\bar y_h - y^\delta\|_Y$$
- (non-)standard element-based a posteriori estimators for the right-hand sides
- $X = \mathcal{M}(\Omega) = C_0(\Omega)^*$: $L^\infty$ a posteriori estimators for $\kappa$
- no bound on $\|\bar u_h - \bar u\|$, not needed for convergence!


14 Problem statement
$$\min_{u \in X} F(K(u)) + G(u)$$
- $F: Y \to \mathbb{R}$, $G: X \to \mathbb{R}$ convex, lower semicontinuous; $X$, $Y$ Hilbert spaces
- $K \in C^2(X, Y)$ nonlinear (here: $K(u) = S(u) - y^\delta$)
Saddle-point formulation:
$$\min_{u \in X}\ \sup_{v \in Y}\ G(u) + \langle K(u), v\rangle - F^*(v)$$
- $F^*: Y \to \mathbb{R}$ Fenchel conjugate

15 Algorithm
$K$ linear:
$$u^{k+1} = \mathrm{prox}_{\tau G}(u^k - \tau K^*v^k)$$
$$\bar u^{k+1} = 2u^{k+1} - u^k$$
$$v^{k+1} = \mathrm{prox}_{\sigma F^*}(v^k + \sigma K\bar u^{k+1})$$
- $\sigma, \tau$ step sizes, $\sigma\tau < \|K\|^{-2}$
- $\mathrm{prox}_{\sigma F}(v) = \arg\min_u \tfrac{1}{2\sigma}\|u - v\|^2 + F(u)$ proximal mapping
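As a reference point, a compact sketch of this linear primal-dual extragradient (Chambolle-Pock) iteration, with the two proximal maps passed in as callables; this is a generic illustration, not the talk's implementation, and the signature conventions (prox callables taking their step size as second argument) are an assumption.

```python
import numpy as np

def chambolle_pock(K, prox_tauG, prox_sigmaFstar, u0, v0, n_iter=500):
    """Primal-dual extragradient method for min F(K u) + G(u), K a matrix.
    prox_tauG(w, tau) and prox_sigmaFstar(w, sigma) evaluate the
    proximal mappings of G and F* with the given step size."""
    L = np.linalg.norm(K, 2)       # operator norm ||K||
    tau = sigma = 0.99 / L         # ensures sigma * tau * ||K||^2 < 1
    u, v = u0.copy(), v0.copy()
    for _ in range(n_iter):
        u_new = prox_tauG(u - tau * (K.T @ v), tau)
        u_bar = 2 * u_new - u      # extragradient (over-relaxation) step
        v = prox_sigmaFstar(v + sigma * (K @ u_bar), sigma)
        u = u_new
    return u, v
```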

16 Algorithm
$K$ nonlinear:
$$u^{k+1} = \mathrm{prox}_{\tau G}(u^k - \tau K'(u^k)^*v^k)$$
$$\bar u^{k+1} = 2u^{k+1} - u^k$$
$$v^{k+1} = \mathrm{prox}_{\sigma F^*}(v^k + \sigma K(\bar u^{k+1}))$$
- $\sigma, \tau$ step sizes, $\sigma\tau < \bigl(\sup_{u \in B_R}\|K'(u)\|\bigr)^{-2}$
- $K'(u)$ Fréchet derivative, $K'(u)^*$ adjoint
- $\mathrm{prox}_{\sigma F}(v) = \arg\min_u \tfrac{1}{2\sigma}\|u - v\|^2 + F(u)$ proximal mapping

17 Algorithm
$K$ nonlinear, accelerated:
$$u^{k+1} = \mathrm{prox}_{\tau_k G}(u^k - \tau_k K'(u^k)^*v^k)$$
$$\omega_k = 1/\sqrt{1 + 2c\tau_k}, \qquad \tau_{k+1} = \omega_k\tau_k, \qquad \sigma_{k+1} = \sigma_k/\omega_k$$
$$\bar u^{k+1} = u^{k+1} + \omega_k(u^{k+1} - u^k)$$
$$v^{k+1} = \mathrm{prox}_{\sigma_{k+1} F^*}(v^k + \sigma_{k+1}K(\bar u^{k+1}))$$
- $\sigma_0, \tau_0$ step sizes, $\sigma_0\tau_0 < \bigl(\sup_{u \in B_R}\|K'(u)\|\bigr)^{-2}$; $c \ge 0$ acceleration parameter
- $K'(u)$ Fréchet derivative, $K'(u)^*$ adjoint
- $\mathrm{prox}_{\sigma F}(v) = \arg\min_u \tfrac{1}{2\sigma}\|u - v\|^2 + F(u)$ proximal mapping
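Putting the last two slides together, a generic sketch of the accelerated nonlinear iteration; `K`, `Kprime` (returning the Jacobian as a matrix), the prox callables, and the step-size values are all user-supplied assumptions, and the acceleration is switched off after finitely many steps as the convergence result on the next slide requires.

```python
import numpy as np

def nl_pdhg_accel(K, Kprime, prox_G, prox_Fstar, u0, v0,
                  sigma0, tau0, c=1.0, N_accel=1000, n_iter=5000):
    """Accelerated primal-dual extragradient method for nonlinear K.
    prox_G(w, tau) and prox_Fstar(w, sigma) include their step sizes;
    the acceleration parameter c is set to 0 after N_accel iterations."""
    u, v = u0.copy(), v0.copy()
    tau, sigma = tau0, sigma0
    for k in range(n_iter):
        ck = c if k <= N_accel else 0.0   # finite acceleration
        u_new = prox_G(u - tau * (Kprime(u).T @ v), tau)
        omega = 1.0 / np.sqrt(1.0 + 2.0 * ck * tau)
        tau, sigma = omega * tau, sigma / omega
        u_bar = u_new + omega * (u_new - u)
        v = prox_Fstar(v + sigma * K(u_bar), sigma)
        u = u_new
    return u, v
```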

18 Convergence
Theorem: The iterates converge locally to a saddle point $(\bar u, \bar v)$ if
1. $G$ is $c_G$-strongly convex (here: $c_G = 1$)
2. $c = c_n \in [0, c_G)$ with $c_n = 0$ for $n > N \in \mathbb{N}$ (finite acceleration)
3. metric regularity holds around the saddle point (cf. [Valkonen 2014])
- allows for inexact evaluation of $K$ (adaptive discretization)
Difficulty: metric regularity in function spaces requires infinite-dimensional set-valued analysis; here: only a rough outline, no details


20 Metric regularity
Saddle-point problem:
$$\min_u\ \sup_v\ G(u) + \langle K(u), v\rangle - F^*(v)$$
Primal-dual optimality conditions:
$$K(\bar u) \in \partial F^*(\bar v), \qquad -K'(\bar u)^*\bar v \in \partial G(\bar u)$$

21 Metric regularity
Set inclusion for $H_{\bar u}: L^2(\Omega)^2 \to 2^{L^2(\Omega)^2}$:
$$0 \in H_{\bar u}(\bar u, \bar v) := \begin{pmatrix} \partial G(\bar u) + K'(\bar u)^*\bar v \\ \partial F^*(\bar v) - K(\bar u) \end{pmatrix}$$

22 Metric regularity
Metric regularity at $(\bar u, \bar v)$:
$$\inf_{q:\, w \in H_{\bar u}(q)} \|q - (\bar u, \bar v)\| \le L\|w\| \quad\text{for all } \|w\| \le \rho$$
- interpretation: a small perturbation $w$ of $0$ yields only a small perturbation $q$ of the saddle point $(\bar u, \bar v)$
- equivalent to the Aubin (Lipschitz) property of the set-valued map $H_{\bar u}^{-1}$ at $(0, (\bar u, \bar v))$

23 Metric regularity
Mordukhovich criterion:
$$L_R = \inf_{t>0}\ \sup\bigl\{\|D^*R(q \mid w)\| : q \in B((\bar u, \bar v), t),\ w \in R(q) \cap B(\bar w, t)\bigr\}$$
- the Aubin constant $L_{H^{-1}}$ is the minimal choice of $L$
- $D^*R$ regular coderivative of $R\ (= H_{\bar u}^{-1})$ (cf. $L = \|f'\|$ for $f \in C^1$)
- requires set-valued analysis in function spaces

24 Set-valued analysis in function space
Difficulties:
- multiple non-equivalent concepts (regular, limiting)
- calculus not tight
Here:
- set-valued mappings arise from subdifferentials of pointwise functionals
- infinite-dimensional (regular) derivatives computed pointwise via nice finite-dimensional (regular, graphical) derivatives
- cf. pointwise Fenchel conjugates, subdifferentials [Ekeland]

25 Application to saddle point inclusion
$$0 \in H_{\bar u}(\bar u, \bar v) = \begin{pmatrix} \partial G(\bar u) + K'(\bar u)^*\bar v \\ \partial F^*(\bar v) - K(\bar u) \end{pmatrix}$$
Graphical derivative for $q = (u, v)$, $w = (\xi, \eta)$, $c_{\bar u} := K(\bar u) - K'(\bar u)\bar u$:
$$DH_{\bar u}(q \mid w)(\Delta q) = \begin{pmatrix} D[\partial G](u \mid \xi - K'(\bar u)^*v)(\Delta u) + K'(\bar u)^*\Delta v \\ D[\partial F^*](v \mid \eta + K'(\bar u)u + c_{\bar u})(\Delta v) - K'(\bar u)\Delta u \end{pmatrix}$$
$$DH_{\bar u}(q \mid w)(\Delta q) = \begin{cases} T_q\Delta q + V(q \mid w) & \Delta q \in V(q \mid w) \\ \emptyset & \text{otherwise} \end{cases}$$
- $T_q$ linear operator (independent of $w$), $V(q \mid w)$ cone
- regular coderivative $\widehat D^*H_{\bar u}(q \mid w) = [DH_{\bar u}(q \mid w)]^{*+}$: upper adjoint of the convexification

26 Mordukhovich criterion
With the structure $DH_{\bar u}(q \mid w)(\Delta q) = T_q\Delta q + V(q \mid w)$ for $\Delta q \in V(q \mid w)$ from above, the Aubin constant satisfies $L_{H^{-1}} \le c < \infty$ if and only if
$$\sup_{t>0}\ \inf\Bigl\{\frac{\|T_q^*\Delta w - z\|}{\|\Delta w\|} : (\Delta w, z) \in W_t(q \mid w),\ \|\Delta w\| > 0\Bigr\} \ge c^{-1} > 0$$
$$W_t(q \mid w) = \bigcup\bigl\{V(q' \mid w') \times V(q' \mid w') : w' \in H_{\bar u}(q'),\ \|q' - q\| < t,\ \|w' - w\| < t\bigr\}$$


28 Application to PDEs
Primal-dual optimality conditions:
$$S(\bar u) - y^\delta \in \partial F^*(\bar v), \qquad -S'(\bar u)^*\bar v = \bar u$$
Metric regularity around $(\bar u, \bar v)$ if either
1. $\displaystyle\sup_{t>0}\ \inf\Bigl\{\frac{\|S'(\bar u)S'(\bar u)^*z - \nu\|}{\|z\|} : (z, \nu) \in V_t^{F^*}(\bar v \mid S(\bar u) - y^\delta),\ z \ne 0\Bigr\} > 0$, or
2. Moreau-Yosida regularization $F^* \to F^*_\gamma := F^* + \tfrac{\gamma}{2}\|\cdot\|^2$, or finite-dimensional data ($Y \to Y_h$)
In case 1: $\|S'(\bar u)^*z\| \ge c\|z\|$ for $z \in V_t^{F^*}(\bar v \mid S(\bar u) - y^\delta)$ is necessary!

29 $L^1$ fitting
$$\min_u\ \tfrac{1}{\alpha}\|S(u) - y^\delta\|_{L^1} + \tfrac12\|u\|_{L^2}^2$$
$$F(y) = \tfrac{1}{\alpha}\int_\Omega |y(x)|\,dx, \qquad f^*(z) = \iota_{[-\alpha^{-1}, \alpha^{-1}]}(z)$$
$S: U \subset L^2(\Omega) \to L^2(\Omega)$, $S(u) =: y$ satisfies
$$-\Delta y + uy = f, \qquad \partial_\nu y = 0$$

30 $L^1$ fitting: metric regularity
Here: $z \in V_t^{F^*}(v \mid \eta)$ if
$$z(x) \in \begin{cases} \{0\} & |v'(x)| = \alpha^{-1} \text{ and } \eta'(x) \ne 0 \\ \operatorname{sign}(v'(x))\,[0, \infty) & |v'(x)| = \alpha^{-1} \text{ and } \eta'(x) = 0 \\ \mathbb{R} & |v'(x)| < \alpha^{-1} \text{ and } \eta'(x) = 0 \end{cases}$$
for some $\|v' - v\| \le t$, $\|\eta' - \eta\| \le t$
- $S'(\bar u)^*$ compact operator: $\|S'(\bar u)^*z\| \ge c\|z\|$ only holds for $z = 0$
- at the saddle point, $\bar\eta = S(\bar u) - y^\delta$ and $\bar v \in \alpha^{-1}\operatorname{sign}(\bar\eta)$: condition in general not satisfied!

31 $L^1$ fitting: algorithm
$$z^{k+1} = S'(u^k)^*v^k$$
$$u^{k+1} = \tfrac{1}{1+\tau_k}(u^k - \tau_k z^{k+1})$$
$$\omega_k = 1/\sqrt{1 + 2c\tau_k}, \qquad \tau_{k+1} = \omega_k\tau_k, \qquad \sigma_{k+1} = \sigma_k/\omega_k$$
$$\bar u^{k+1} = u^{k+1} + \omega_k(u^{k+1} - u^k)$$
$$v^{k+1} = \mathrm{proj}_{[-\alpha^{-1}, \alpha^{-1}]}\Bigl(\tfrac{1}{1+\sigma_{k+1}\gamma}\bigl(v^k + \sigma_{k+1}(S(\bar u^{k+1}) - y^\delta)\bigr)\Bigr)$$
- $S'(u^k)^*v^k$: solution of an adjoint equation
- $\mathrm{proj}_C$: pointwise projection onto the convex set $C \subset \mathbb{R}$
- Moreau-Yosida parameter $\gamma \ge 0$
- local convergence if $\gamma > 0$ or finite-dimensional
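The two pointwise steps of this iteration are cheap to evaluate; a sketch of the primal update (prox of $G(u) = \tfrac12\|u\|^2$) and the Moreau-Yosida damped dual projection, with array shapes and default values as illustrative assumptions:

```python
import numpy as np

def l1_primal_step(u, z, tau):
    """prox of G(u) = 0.5*||u||^2: u_new = (u - tau*z) / (1 + tau)."""
    return (u - tau * z) / (1.0 + tau)

def l1_dual_step(v, residual, sigma, alpha, gamma=1e-12):
    """Moreau-Yosida damped pointwise projection onto [-1/alpha, 1/alpha];
    residual = S(u_bar) - y_delta evaluated on the grid."""
    w = (v + sigma * residual) / (1.0 + sigma * gamma)
    return np.clip(w, -1.0 / alpha, 1.0 / alpha)
```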

32 $L^\infty$ fitting
$$\min_u\ \tfrac12\|u\|_{L^2}^2 \quad\text{s.t.}\quad |S(u)(x) - y^\delta(x)| \le \delta \ \text{a.e. in } \Omega$$
$$F(y) = \iota_{\{|y(x)| \le \delta\}}(y), \qquad f^*(z) = \delta|z|$$
$S: U \subset L^2(\Omega) \to L^2(\Omega)$, $S(u) =: y$ satisfies
$$-\Delta y + uy = f, \qquad \partial_\nu y = 0$$
for $t = 0$: $z(x) = 0$ if $|S(\bar u)(x) - y^\delta(x)| < \delta$, so the estimate is unlikely to hold

33 $L^\infty$ fitting: algorithm
$$z^{k+1} = S'(u^k)^*v^k$$
$$u^{k+1} = \tfrac{1}{1+\tau_k}(u^k - \tau_k z^{k+1})$$
$$\omega_k = 1/\sqrt{1 + 2c\tau_k}, \qquad \tau_{k+1} = \omega_k\tau_k, \qquad \sigma_{k+1} = \sigma_k/\omega_k$$
$$\bar u^{k+1} = u^{k+1} + \omega_k(u^{k+1} - u^k)$$
$$\tilde v^{k+1} = v^k + \sigma_{k+1}(S(\bar u^{k+1}) - y^\delta)$$
$$v^{k+1} = \tfrac{1}{1+\sigma_{k+1}\gamma}\bigl(|\tilde v^{k+1}| - \delta\sigma_{k+1}\bigr)_+ \operatorname{sign}(\tilde v^{k+1})$$
- local convergence if $\gamma > 0$ or finite-dimensional
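The dual update is now a pointwise soft-shrinkage by $\delta\sigma_{k+1}$ (the prox of $\sigma f^*$ with $f^*(z) = \delta|z|$), again with Moreau-Yosida damping; a sketch under the same illustrative conventions as above:

```python
import numpy as np

def linf_dual_step(v, residual, sigma, delta, gamma=1e-12):
    """Soft-shrinkage prox of sigma*delta*|.| with Moreau-Yosida damping;
    residual = S(u_bar) - y_delta evaluated on the grid."""
    w = v + sigma * residual
    shrunk = np.maximum(np.abs(w) - delta * sigma, 0.0) * np.sign(w)
    return shrunk / (1.0 + sigma * gamma)
```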


35 $L^1$ fitting
- $\Omega = [-1, 1]$, FE ($P_1$-$P_0$) discretization of $(y, u)$
- random impulsive noise:
$$y^\delta(x) = \begin{cases} y^\dagger(x) + \xi(x) & \text{with probability } 0.3 \\ y^\dagger(x) & \text{else} \end{cases}$$
- $y^\dagger = S(u^\dagger)$, $\xi(x) \sim N(0, 0.1)$, $\alpha = 10^{-2}$
- $\sigma_0 = L^{-1}$, $\tau_0 = 0.99\,L^{-1}$, $L = \|S(u^0)\|/\|u^0\|$
- $\gamma = 10^{-12}$, $u^0 \equiv 1$, $v^0 = 0$ (no warmstart!)
- compare $c \in \{0, 1\}$, $N \in \{100, 1000, 10000\}$
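A sketch of this noise model; the grid, the random generator, and the reading of $N(0, 0.1)$ as standard deviation $0.1$ (rather than variance) are assumptions that may differ from the talk's setup.

```python
import numpy as np

def impulsive_noise(y_true, prob=0.3, std=0.1, rng=None):
    """Corrupt each node with Gaussian noise with probability `prob`."""
    rng = np.random.default_rng() if rng is None else rng
    hit = rng.random(y_true.shape) < prob
    return y_true + hit * rng.normal(0.0, std, y_true.shape)
```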

36 $L^1$ fitting: acceleration (same data)
[Figure: functional value $J_\gamma(u^i)$ over the iterations, comparing no acceleration ($c = 0$) with acceleration ($c = c_G = 1$).]

37 $L^1$ fitting: discretization (avg. of 10)
[Figure: functional value $J_\gamma(u^i)$ over the iterations for $N = 100$, $N = 1000$, $N = 10000$.]

38 $L^\infty$ fitting
- $\Omega = [-1, 1]$, FE ($P_1$-$P_0$) discretization of $(y, u)$
- quantization noise: round $y^\dagger$ to $n_b = 11$ equidistant values
- $\gamma = 10^{-12}$, $\sigma_0 = L^{-1}$, $\tau_0 = 0.99\,L^{-1}$, $L = \|S(u^0)\|/\|u^0\|$
- $u^0 \equiv 1$, $v^0 = 0$ (no warmstart!)
- compare $c \in \{0, 1\}$, $N \in \{100, 1000, 10000\}$
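A sketch of this quantization noise; placing the equidistant levels over the range of $y^\dagger$ is an assumption, since the talk does not specify the level placement.

```python
import numpy as np

def quantize(y_true, n_levels=11):
    """Round y to n_levels equidistant values between its min and max."""
    lo, hi = y_true.min(), y_true.max()
    levels = np.linspace(lo, hi, n_levels)
    idx = np.argmin(np.abs(y_true[..., None] - levels), axis=-1)
    return levels[idx]
```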

39 $L^\infty$ fitting: acceleration (same data)
[Figure: functional value $J_\gamma(u^i)$ over the iterations, comparing $c = 0$ and $c = c_G = 1$.]

40 $L^\infty$ fitting: discretization (same data)
[Figure: functional value $J_\gamma(u^i)$ over the iterations for $N = 100$, $N = 1000$, $N = 10000$.]

41 Conclusion
Primal-dual extragradient methods in function space:
- can be accelerated
- analyzed using set-valued analysis in function space
- require Moreau-Yosida regularization (no norm gap, continuation needed); mesh-independence
Outlook:
- full acceleration
- partial stability (w.r.t. the primal variable only)
- adaptive discretization using functional estimators
Preprints/code:
