On the acceleration of the double smoothing technique for unconstrained convex optimization problems


Radu Ioan Boţ, Christopher Hendrich

October 10, 2012

Abstract. In this article we investigate the possibilities of accelerating the double smoothing technique when solving unconstrained nondifferentiable convex optimization problems. This approach relies on the regularization, in two steps, of the Fenchel dual problem associated to the problem to be solved, turning it into an optimization problem having a differentiable, strongly convex objective function with Lipschitz continuous gradient. The doubly regularized dual problem is then solved via a fast gradient method. The aim of this paper is to show how the properties of the functions in the objective of the primal problem influence the implementation of the double smoothing approach and its rate of convergence. The theoretical results are applied to linear inverse problems by making use of different regularization functionals.

Keywords. Fenchel duality, regularization, fast gradient method, image processing

AMS subject classification. 90C25, 90C46, 47A52

1 Introduction

In this paper we develop an efficient algorithm based on the double smoothing approach for solving unconstrained nondifferentiable optimization problems of the type

$(P) \qquad \inf_{x \in H} \{f(x) + g(Ax)\},$  (1)

where $H$ is a real Hilbert space, $f : H \to \overline{\mathbb{R}}$ and $g : \mathbb{R}^m \to \overline{\mathbb{R}}$ are proper, convex and lower semicontinuous functions and $A : H \to \mathbb{R}^m$ is a linear continuous operator fulfilling the feasibility condition $A(\operatorname{dom} f) \cap \operatorname{dom} g \neq \emptyset$. The double smoothing technique for solving this class of optimization problems (see [8] for a version of it in fully finite-dimensional spaces) amounts to efficiently solving the corresponding Fenchel dual problem and then recovering, via an approximately optimal solution of the latter, an approximately optimal solution of the primal.
This technique, which represents a generalization of the approach developed in [10] for a special class of convex constrained optimization

Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, e-mail: radu.bot@mathematik.tu-chemnitz.de. Research partially supported by DFG (German Research Foundation), project BO 2516/4-1.

Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, e-mail: christopher.hendrich@mathematik.tu-chemnitz.de. Research supported by a Graduate Fellowship of the Free State Saxony, Germany.

problems, makes use of the structure of the Fenchel dual and relies on the regularization of the latter, in two steps, into an optimization problem having a differentiable, strongly convex objective function with Lipschitz continuous gradient. The regularized dual is then solved by a fast gradient method, which gives rise to a sequence of dual variables that solves the non-regularized dual problem after $O\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations, whenever $f$ and $g$ have bounded effective domains. In addition, the norm of the gradient of the regularized dual objective decreases at the same rate of convergence, a fact which is crucial in view of reconstructing an approximately optimal solution to $(P)$ after $O\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations (see [8]).

The first aim of this paper is to show that, whenever $g$ is a strongly convex function, one can obtain the same convergence rate even without imposing boundedness of its effective domain. Further we show that if, additionally, $f$ is strongly convex or $g$ is everywhere differentiable with a Lipschitz continuous gradient, then the convergence rate becomes $O\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$, while, if these supplementary assumptions are simultaneously fulfilled, a convergence rate of $O\left(\ln\frac{1}{\varepsilon}\right)$ can be guaranteed.

The structure of the paper is the following. The forthcoming section is dedicated to some preliminaries on convex analysis and Fenchel duality. In Section 3 we employ the smoothing technique introduced in [12-14] in order to make the objective of the Fenchel dual problem of $(P)$ strongly convex and differentiable with Lipschitz continuous gradient. In Section 4 we solve the regularized dual problem via an efficient fast gradient method, show how an approximately optimal primal solution can be recovered from a dual iterate and investigate the convergence properties of the sequence of primal approximate solutions.
Section 5 addresses the question of how the properties of the functions in the objective of $(P)$ influence the implementation of the double smoothing approach and improve its rate of convergence. Finally, in Section 6, we consider an application of the presented approach to image deblurring and solve to this end a linear inverse problem by using two different regularization functionals.

2 Preliminaries on convex analysis and Fenchel duality

Throughout this paper, $\langle\cdot,\cdot\rangle$ and $\|\cdot\| = \sqrt{\langle\cdot,\cdot\rangle}$ denote the inner product and, respectively, the norm of the real Hilbert space $H$, which is allowed to be infinite-dimensional. The closure of a set $C \subseteq H$ is denoted by $\operatorname{cl}(C)$, while its indicator function is the function $\delta_C : H \to \overline{\mathbb{R}} := \mathbb{R} \cup \{\pm\infty\}$ defined by $\delta_C(x) = 0$ for $x \in C$ and $\delta_C(x) = +\infty$ otherwise. For a function $f : H \to \overline{\mathbb{R}}$ we denote by $\operatorname{dom} f := \{x \in H : f(x) < +\infty\}$ its effective domain. We call $f$ proper if $\operatorname{dom} f \neq \emptyset$ and $f(x) > -\infty$ for all $x \in H$. The conjugate function of $f$ is $f^* : H \to \overline{\mathbb{R}}$, $f^*(p) = \sup\{\langle p, x\rangle - f(x) : x \in H\}$ for all $p \in H$. The biconjugate function of $f$ is $f^{**} : H \to \overline{\mathbb{R}}$, $f^{**}(x) = \sup\{\langle x, p\rangle - f^*(p) : p \in H\}$ and, when $f$ is proper, convex and lower semicontinuous, then, according to the Fenchel-Moreau Theorem, one has $f = f^{**}$. The (convex) subdifferential of the function $f$ at $x \in H$ is the set $\partial f(x) = \{p \in H : f(y) - f(x) \geq \langle p, y - x\rangle \ \forall y \in H\}$ if $f(x) \in \mathbb{R}$, and is taken to be the empty set otherwise. Further, we consider the space $\mathbb{R}^m$ endowed with the Euclidean inner product and norm, for which we use the same notations as for the real Hilbert space $H$, since no confusion can arise. By $1_m$ we denote the vector in $\mathbb{R}^m$ with all entries equal to $1$. For

a subset $C$ of $\mathbb{R}^m$ we denote by $\operatorname{ri}(C)$ its relative interior, i.e. the interior of the set $C$ relative to its affine hull. For a linear continuous operator $A : H \to \mathbb{R}^m$ the operator $A^* : \mathbb{R}^m \to H$, defined by $\langle A^* y, x\rangle = \langle y, Ax\rangle$ for all $x \in H$ and all $y \in \mathbb{R}^m$, is its so-called adjoint operator. By $\operatorname{id} : \mathbb{R}^m \to \mathbb{R}^m$, $\operatorname{id}(x) = x$ for all $x \in \mathbb{R}^m$, we denote the identity mapping on $\mathbb{R}^m$. For a nonempty, convex and closed set $C \subseteq H$ we consider the projection operator $P_C : H \to C$, defined as $x \mapsto \operatorname{arg\,min}_{z \in C} \|x - z\|$.

Having two functions $f, g : H \to \overline{\mathbb{R}}$, their infimal convolution is defined by $f \,\square\, g : H \to \overline{\mathbb{R}}$, $(f \,\square\, g)(x) = \inf_{y \in H} \{f(y) + g(x - y)\}$ for all $x \in H$. The Moreau envelope of parameter $\gamma > 0$ of the function $f : H \to \overline{\mathbb{R}}$ is ${}^{\gamma}f : H \to \overline{\mathbb{R}}$, defined as the infimal convolution

${}^{\gamma}f(x) := \left(f \,\square\, \frac{1}{2\gamma}\|\cdot\|^2\right)(x) = \inf_{y \in H} \left\{ f(y) + \frac{1}{2\gamma}\|x - y\|^2 \right\} \quad \forall x \in H.$

The proximal point of $f$ at $x \in H$ denotes the unique minimizer of the optimization problem

$\inf_{y \in H} \left\{ f(y) + \frac{1}{2}\|x - y\|^2 \right\}.$

For $\beta > 0$ we say that the function $f : H \to \overline{\mathbb{R}}$ is $\beta$-strongly convex if for all $x, y \in H$ and all $\lambda \in (0,1)$ it holds

$f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y) - \frac{\beta}{2}\lambda(1-\lambda)\|x - y\|^2.$

Notice that this is equivalent to saying that $x \mapsto f(x) - \frac{\beta}{2}\|x\|^2$ is convex.

For the optimization problem $(P)$ we consider the following standing assumptions: $f : H \to \overline{\mathbb{R}}$ is a proper, convex and lower semicontinuous function with a bounded effective domain, $g : \mathbb{R}^m \to \overline{\mathbb{R}}$ is a proper, $\mu$-strongly convex ($\mu > 0$) and lower semicontinuous function, and $A : H \to \mathbb{R}^m$ is a linear and continuous operator fulfilling $A(\operatorname{dom} f) \cap \operatorname{dom} g \neq \emptyset$.

Remark 1. Different to the investigations made in [8] in a fully finite-dimensional setting, we strengthen here the convexity assumptions on $g$ (there $g$ was asked to be only proper, convex and lower semicontinuous), but allow in counterpart $\operatorname{dom} g$ to be unbounded. The gain of weakening this assumption is emphasized by the applications considered in Section 6.

The Fenchel dual problem to $(P)$ (see, for instance, [5, 6]) reads

$(D) \qquad \sup_{p \in \mathbb{R}^m} \{-f^*(A^* p) - g^*(-p)\}.$  (2)

We denote the optimal objective values of the optimization problems $(P)$ and $(D)$ by $v(P)$ and $v(D)$, respectively. The conjugate functions of $f$ and $g$ can be written as

$f^*(q) = \sup_{x \in \operatorname{dom} f} \{\langle q, x\rangle - f(x)\} = -\inf_{x \in \operatorname{dom} f} \{-\langle q, x\rangle + f(x)\} \quad \forall q \in H$

and

$g^*(p) = \sup_{x \in \operatorname{dom} g} \{\langle p, x\rangle - g(x)\} = -\inf_{x \in \operatorname{dom} g} \{-\langle p, x\rangle + g(x)\} \quad \forall p \in \mathbb{R}^m,$

respectively. According to [1, Theorem 11.9] and [4, Lemma 2.33], the optimization problems arising in the formulation of both $f^*(q)$ for all $q \in H$ and $g^*(p)$ for all $p \in \mathbb{R}^m$ are solvable, a fact which implies that $\operatorname{dom} f^* = H$ and $\operatorname{dom} g^* = \mathbb{R}^m$, respectively. By writing the dual problem $(D)$ equivalently as the infimum optimization problem

$\inf_{p \in \mathbb{R}^m} \{f^*(A^* p) + g^*(-p)\},$

one can easily see that the Fenchel dual problem of the latter is

$\sup_{x \in H} \{-f^{**}(x) - g^{**}(Ax)\},$

which, by the Fenchel-Moreau Theorem, is nothing else than

$\sup_{x \in H} \{-f(x) - g(Ax)\}.$

In order to guarantee strong duality for this primal-dual pair it is sufficient to ensure that (see, for instance, [5, Theorem 2.1])

$0 \in \operatorname{ri}(A^*(\operatorname{dom} g^*) + \operatorname{dom} f^*).$

As $f^*$ has full domain, this regularity condition is automatically fulfilled, which means that $v(D) = v(P)$ and the primal optimization problem $(P)$ has an optimal solution. Due to the fact that $f$ and $g$ are proper and $A(\operatorname{dom} f) \cap \operatorname{dom} g \neq \emptyset$, this further implies $v(D) = v(P) \in \mathbb{R}$. Later we will assume that the dual problem $(D)$ has an optimal solution, too, and that an upper bound on its norm is known.

Denote by $\theta : \mathbb{R}^m \to \mathbb{R}$, $\theta(p) = f^*(A^* p) + g^*(-p)$, the objective function of the above infimum reformulation of $(D)$. Hence, the dual can be equivalently written as

$\inf_{p \in \mathbb{R}^m} \theta(p).$  (3)

The assumptions made on $g$ yield that $p \mapsto g^*(-p)$ is differentiable and has a Lipschitz continuous gradient (see Subsection 3.1 for details). However, since in general one cannot guarantee the smoothness of $p \mapsto f^*(A^* p)$, the dual problem $(D)$ is a nondifferentiable convex optimization problem. Our goal is to solve this problem efficiently and to obtain from here an optimal solution to $(P)$. As in [8], we overcome the unsatisfactory complexity of subgradient schemes, i.e. $O\left(\frac{1}{\varepsilon^2}\right)$, by making use of the smoothing techniques introduced in [12-14]. More precisely, we first regularize $f^*(A^*\cdot)$ by a quadratic term in order to obtain a smooth approximation of $p \mapsto f^*(A^* p)$.
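To make the primal-dual pair (1)-(2) concrete, the following sketch (a toy instance of ours, not from the paper) takes $H = \mathbb{R}$, $m = 1$, $A = \operatorname{id}$, $f(x) = |x|$ and $g(y) = \frac{1}{2}(y-b)^2$, which is $1$-strongly convex, and checks numerically that the optimal values of $(P)$ and $(D)$ coincide, as the strong duality result above predicts. (Here $\operatorname{dom} f$ is not bounded, so $f$ does not meet the standing assumptions; properness, convexity and lower semicontinuity suffice for this duality check.)

```python
b = 2.0
f = lambda x: abs(x)
g = lambda y: 0.5 * (y - b) ** 2

# Primal value v(P) = inf_x { f(x) + g(x) } by fine grid search (A = id).
xs = [i / 1000.0 for i in range(-4000, 4001)]
vP = min(f(x) + g(x) for x in xs)

# Dual value v(D) = sup_p { -f*(p) - g*(-p) }:
# f* = indicator of [-1, 1] (conjugate of |.|), g*(p) = p*b + p^2/2.
ps = [i / 1000.0 for i in range(-1000, 1001)]   # dom f* = [-1, 1]
vD = max(-(-p * b + 0.5 * p ** 2) for p in ps)

print(round(vP, 6), round(vD, 6))  # both 1.5: strong duality holds here
```

The primal minimizer is the soft-thresholded point $x = 1$, and the dual maximizer $p = 1$ is a subgradient of $g$ there, which is exactly the optimality-condition pairing Fenchel duality encodes.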
Then we apply a second regularization to the new dual objective and minimize the regularized problem via an appropriate fast gradient scheme (see [8]). This will allow us to solve both optimization problems $(D)$ and $(P)$ approximately within $O\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations. More than that, we will show that this rate of convergence can be improved when strengthening the assumptions imposed on $f$ and $g$.
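The smoothing machinery of the next section is driven by the Moreau envelope and the proximal point introduced in Section 2. As a warm-up, for $f(x) = |x|$ on $H = \mathbb{R}$ the proximal point is the soft-thresholding operation and the Moreau envelope is the Huber function; a minimal sketch (the helper names are ours):

```python
import math

def prox_abs(x, gamma):
    # Proximal point of gamma*|.| at x: argmin_y { |y| + (1/(2*gamma)) * (x - y)^2 }.
    # Closed form: soft-thresholding with threshold gamma.
    return math.copysign(max(abs(x) - gamma, 0.0), x)

def moreau_env_abs(x, gamma):
    # Moreau envelope of f = |.|: inf_y { |y| + (1/(2*gamma)) * (x - y)^2 },
    # evaluated at the proximal point; this equals the Huber function.
    y = prox_abs(x, gamma)
    return abs(y) + (x - y) ** 2 / (2 * gamma)

print(prox_abs(3.0, 1.0))        # 2.0
print(moreau_env_abs(3.0, 1.0))  # 2.5   (= |x| - gamma/2 for |x| >= gamma)
print(moreau_env_abs(0.5, 1.0))  # 0.125 (= x^2/(2*gamma) for |x| <= gamma)
```

The envelope lower-bounds $f$, is differentiable everywhere (including at $0$, where $f$ is not), and its gradient is Lipschitz continuous; these are exactly the properties exploited by the first smoothing below.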

3 The double smoothing approach

3.1 First smoothing

For a real number $\rho > 0$ the function $p \mapsto f^*(A^* p) = \sup_{x \in H} \{\langle A^* p, x\rangle - f(x)\}$ can be approximated by

$f^*_\rho(A^* p) = \sup_{x \in H} \left\{\langle A^* p, x\rangle - f(x) - \frac{\rho}{2}\|x\|^2\right\}.$  (4)

For each $p \in \mathbb{R}^m$ the maximization problem which occurs in the formulation of $f^*_\rho(A^* p)$ has a unique solution (see, for instance, [4, Lemma 2.33]), a fact which implies that $f^*_\rho(A^* p) \in \mathbb{R}$. For all $p \in \mathbb{R}^m$ one can express the above regularization of the conjugate by means of the Moreau envelope of $f$ as follows:

$f^*_\rho(A^* p) = \sup_{x \in H} \left\{\langle A^* p, x\rangle - f(x) - \frac{\rho}{2}\|x\|^2\right\} = -\inf_{x \in H} \left\{ f(x) + \frac{\rho}{2}\left\|\frac{A^* p}{\rho} - x\right\|^2 \right\} + \frac{\|A^* p\|^2}{2\rho} = -\left({}^{1/\rho}f\right)\left(\frac{A^* p}{\rho}\right) + \frac{\|A^* p\|^2}{2\rho}.$

Consequently, one can transfer the differentiability properties of the Moreau envelope (see [1, Proposition 12.29]) to $p \mapsto (f^*_\rho \circ A^*)(p)$. For all $p \in \mathbb{R}^m$ we have

$\nabla(f^*_\rho \circ A^*)(p) = -\frac{1}{\rho} A\, \nabla\left({}^{1/\rho}f\right)\left(\frac{A^* p}{\rho}\right) + \frac{A A^* p}{\rho} = -\frac{1}{\rho} A\left(A^* p - \rho\, x_{f,p}\right) + \frac{A A^* p}{\rho} = A x_{f,p},$

where $x_{f,p} \in H$ is the proximal point of $\frac{1}{\rho} f$ at $\frac{A^* p}{\rho}$, namely the unique element in $H$ fulfilling (see [1, Proposition 12.29])

$\left({}^{1/\rho}f\right)\left(\frac{A^* p}{\rho}\right) = f(x_{f,p}) + \frac{\rho}{2}\left\|\frac{A^* p}{\rho} - x_{f,p}\right\|^2.$

By taking into account the nonexpansiveness of the proximal point mapping (see [1, Proposition 12.27]), for $p, q \in \mathbb{R}^m$ it holds

$\|\nabla(f^*_\rho \circ A^*)(p) - \nabla(f^*_\rho \circ A^*)(q)\| = \|A x_{f,p} - A x_{f,q}\| \leq \|A\|\, \|x_{f,p} - x_{f,q}\| \leq \|A\| \left\|\frac{A^* p}{\rho} - \frac{A^* q}{\rho}\right\| \leq \frac{\|A\|^2}{\rho}\|p - q\|,$

thus $\frac{\|A\|^2}{\rho}$ is a Lipschitz constant of $\nabla(f^*_\rho \circ A^*)$.

Coming now to the function $p \mapsto g^*(-p) = (g^* \circ (-\operatorname{id}))(p)$, let us notice first that, since $g$ is proper, $\mu$-strongly convex and lower semicontinuous, $g^*$ is differentiable and $\nabla g^*$ is Lipschitz continuous with Lipschitz constant $\frac{1}{\mu}$ (cf. [1, Theorem 18.15]). Thus

$g^* \circ (-\operatorname{id})$ is Fréchet differentiable, too, and its gradient is Lipschitz continuous with Lipschitz constant $\frac{1}{\mu}$. By denoting $x_{g,p} := \nabla g^*(-p) = -\nabla(g^* \circ (-\operatorname{id}))(p)$, one has $-p \in \partial g(x_{g,p})$ or, equivalently, $0 \in \partial(\langle p, \cdot\rangle + g)(x_{g,p})$, which means that $x_{g,p}$ is the unique optimal solution (see [4, Lemma 2.33]) of the optimization problem

$\inf_{x \in \mathbb{R}^m} \{\langle p, x\rangle + g(x)\}.$

Remark 2. If $f$ is $\beta$-strongly convex ($\beta > 0$), then there is no need to apply the first regularization to $p \mapsto f^*(A^* p)$, as this function is already Fréchet differentiable with a Lipschitz continuous gradient having Lipschitz constant $\frac{\|A\|^2}{\beta}$. Indeed, the $\beta$-strong convexity of $f$ implies that $f^*$ is Fréchet differentiable with Lipschitz continuous gradient having Lipschitz constant $\frac{1}{\beta}$ (see [1, Theorem 18.15]). Hence, for all $p, q \in \mathbb{R}^m$, we have

$\|\nabla(f^* \circ A^*)(p) - \nabla(f^* \circ A^*)(q)\| = \|A \nabla f^*(A^* p) - A \nabla f^*(A^* q)\| \leq \frac{\|A\|}{\beta}\|A^* p - A^* q\| \leq \frac{\|A\|^2}{\beta}\|p - q\|.$

Taking $x_{f,p} := \nabla f^*(A^* p)$, one has $0 \in \partial(f - \langle A^* p, \cdot\rangle)(x_{f,p})$, which means that $x_{f,p}$ is the unique optimal solution (see [4, Lemma 2.33]) of the optimization problem

$\inf_{x \in H} \{f(x) - \langle A^* p, x\rangle\}.$

By denoting $D_f := \sup\left\{\frac{\|x\|^2}{2} : x \in \operatorname{dom} f\right\} \in \mathbb{R}$, we can relate $f^* \circ A^*$ and its smooth approximation $f^*_\rho \circ A^*$ as follows.

Proposition 3. For all $p \in \mathbb{R}^m$ it holds

$f^*_\rho(A^* p) \leq f^*(A^* p) \leq f^*_\rho(A^* p) + \rho D_f.$

Proof. For $p \in \mathbb{R}^m$ one has

$f^*_\rho(A^* p) = \langle A^* p, x_{f,p}\rangle - f(x_{f,p}) - \frac{\rho}{2}\|x_{f,p}\|^2 \leq \langle A^* p, x_{f,p}\rangle - f(x_{f,p}) \leq f^*(A^* p) \leq \sup_{x \in \operatorname{dom} f}\left\{\langle A^* p, x\rangle - f(x) - \frac{\rho}{2}\|x\|^2\right\} + \sup_{x \in \operatorname{dom} f}\left\{\frac{\rho}{2}\|x\|^2\right\} = f^*_\rho(A^* p) + \rho D_f.$
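Proposition 3 can be checked numerically on a one-dimensional instance of our choosing: $f = \delta_{[-1,1]}$ (so $f^*(q) = |q|$, $D_f = \frac{1}{2}$, and the maximizer $x_{f,p}$ is a clipped scaling), with $A = \operatorname{id}$. The helper names are ours:

```python
rho = 0.1

def x_f(q):
    # Unique maximizer of q*x - f(x) - (rho/2)*x^2 for f = indicator of [-1, 1]:
    # clip(q/rho) to the box.
    return max(-1.0, min(1.0, q / rho))

def f_star_rho(q):
    # Smoothed conjugate f*_rho(q), evaluated at its maximizer x_f(q).
    x = x_f(q)
    return q * x - 0.5 * rho * x * x

D_f = 0.5  # sup { x^2/2 : x in [-1, 1] }
for q in [-2.0, -0.05, 0.0, 0.03, 1.7]:
    f_star = abs(q)                      # exact conjugate of the indicator
    assert f_star_rho(q) <= f_star + 1e-12
    assert f_star <= f_star_rho(q) + rho * D_f + 1e-12
print("Proposition 3 bounds verified")
```

Note also that $q \mapsto f^*_\rho(q)$ has the derivative $x_f(q)$, whose slope is at most $\frac{1}{\rho}$, matching the Lipschitz constant $\frac{\|A\|^2}{\rho}$ derived above.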

For $\rho > 0$ let $\theta_\rho : \mathbb{R}^m \to \mathbb{R}$ be defined by $\theta_\rho(p) = f^*_\rho(A^* p) + g^*(-p)$. The function $\theta_\rho$ is differentiable with a Lipschitz continuous gradient

$\nabla\theta_\rho(p) = \nabla(f^*_\rho \circ A^*)(p) + \nabla(g^* \circ (-\operatorname{id}))(p) = A x_{f,p} - x_{g,p} \quad \forall p \in \mathbb{R}^m,$

having as Lipschitz constant $L(\rho) := \frac{\|A\|^2}{\rho} + \frac{1}{\mu}$. In consideration of Proposition 3 we get

$\theta_\rho(p) \leq \theta(p) \leq \theta_\rho(p) + \rho D_f \quad \forall p \in \mathbb{R}^m.$  (5)

In order to reconstruct an approximately optimal solution to the primal optimization problem $(P)$ it is not sufficient to ensure the convergence of $\theta(\cdot)$ to $v(D)$; we also need good convergence properties for the decrease of $\|\nabla\theta_\rho(\cdot)\|$ (cf. [8, 10]).

3.2 Second smoothing

In the following, a second regularization is applied to $\theta_\rho$, as done in [8, 10], in order to make it strongly convex, a fact which will allow us to use a fast gradient scheme with a good convergence rate for the decrease of $\|\nabla\theta_\rho(\cdot)\|$. Therefore, adding the strongly convex function $\frac{\gamma}{2}\|\cdot\|^2$ to $\theta_\rho$, for some positive real number $\gamma$, gives rise to the following regularization of the objective function:

$\theta_{\rho,\gamma} : \mathbb{R}^m \to \mathbb{R}, \quad \theta_{\rho,\gamma}(p) := \theta_\rho(p) + \frac{\gamma}{2}\|p\|^2 = f^*_\rho(A^* p) + g^*(-p) + \frac{\gamma}{2}\|p\|^2,$

which is obviously $\gamma$-strongly convex. We further deal with the optimization problem

$\inf_{p \in \mathbb{R}^m} \theta_{\rho,\gamma}(p).$  (6)

By taking into account [4, Lemma 2.33], the optimization problem (6) has a unique optimal solution, while the function $\theta_{\rho,\gamma}$ is differentiable and for all $p \in \mathbb{R}^m$ it holds

$\nabla\theta_{\rho,\gamma}(p) = \nabla\left(\theta_\rho(\cdot) + \frac{\gamma}{2}\|\cdot\|^2\right)(p) = A x_{f,p} - x_{g,p} + \gamma p.$

This gradient is Lipschitz continuous with constant $L(\rho,\gamma) := \frac{\|A\|^2}{\rho} + \frac{1}{\mu} + \gamma$.

Remark 4. If $\theta_\rho$ is strongly convex, then there is no need to apply the second regularization, as this function is already endowed with the properties of $\theta_{\rho,\gamma}$.

4 Solving the doubly regularized dual problem

4.1 A fast gradient method

In the forthcoming sections we denote by $p^*_{DS}$ the unique optimal solution of the optimization problem (6) and by $\theta^*_{\rho,\gamma} := \theta_{\rho,\gamma}(p^*_{DS})$ its optimal objective value. Further, we denote by $p^* \in \mathbb{R}^m$ an optimal solution to the dual optimization problem $(D)$ and we assume that the upper bound

$\|p^*\| \leq R$  (7)

is available for some nonzero $R \in \mathbb{R}_+$. Furthermore, as in [8, 10], we make use of the following fast gradient method (see [11, Algorithm 2.2.11]):

(FGM)
Initialization: set $w_0 = p_0 := 0 \in \mathbb{R}^m$.
For $k \geq 0$:
  $p_{k+1} := w_k - \frac{1}{L(\rho,\gamma)} \nabla\theta_{\rho,\gamma}(w_k)$,
  $w_{k+1} := p_{k+1} + \frac{\sqrt{L(\rho,\gamma)} - \sqrt{\gamma}}{\sqrt{L(\rho,\gamma)} + \sqrt{\gamma}}\,(p_{k+1} - p_k),$

for minimizing the optimization problem (6), which has a strongly convex and differentiable objective function with a Lipschitz continuous gradient. By taking into account [11, Theorem 2.2.3] we obtain a sequence $(p_k)_{k\geq 0} \subseteq \mathbb{R}^m$ satisfying

$\theta_{\rho,\gamma}(p_k) - \theta_{\rho,\gamma}(p^*_{DS}) \leq \left(\theta_{\rho,\gamma}(0) - \theta_{\rho,\gamma}(p^*_{DS}) + \frac{\gamma}{2}\|p^*_{DS}\|^2\right) e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}}$  (8)
$= \left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0.$  (9)

Since $p^*_{DS}$ solves (6), we have $\nabla\theta_{\rho,\gamma}(p^*_{DS}) = 0$ and, therefore (see [11, Theorem 2.1.5]),

$\|\nabla\theta_{\rho,\gamma}(p_k)\|^2 \leq 2 L(\rho,\gamma)\left(\theta_{\rho,\gamma}(p_k) - \theta_{\rho,\gamma}(p^*_{DS})\right) \overset{(9)}{\leq} 2 L(\rho,\gamma)\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0.$  (10)

Due to the $\gamma$-strong convexity of $\theta_{\rho,\gamma}$, [11, Theorem 2.1.8] states

$\|p_k - p^*_{DS}\|^2 \leq \frac{2}{\gamma}\left(\theta_{\rho,\gamma}(p_k) - \theta_{\rho,\gamma}(p^*_{DS})\right) \overset{(9)}{\leq} \frac{2}{\gamma}\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0.$  (11)

We first prove that the rates of convergence for the decrease of $\theta(p_k) - \theta(p^*)$ and of $\|\nabla\theta_\rho(p_k)\|$ coincide, being equal to $O\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$, and that they can be improved when $f$ and/or $g$ fulfill additional assumptions. We also show how $\varepsilon$-optimal solutions to the primal problem $(P)$ can be recovered from the sequence of dual variables $(p_k)_{k\geq 0}$. To this aim we proceed along the lines of the considerations from [8, 10], and this is why we refer the reader to these papers for detailed argumentation.

4.2 Convergence of $\theta(p_k)$ to $\theta(p^*)$

Using again [11, Theorem 2.1.8] we obtain

$\frac{\gamma}{2}\|p^*_{DS}\|^2 \leq \theta_{\rho,\gamma}(0) - \theta_{\rho,\gamma}(p^*_{DS}) = \theta_\rho(0) - \theta_\rho(p^*_{DS}) - \frac{\gamma}{2}\|p^*_{DS}\|^2,$

which implies that

$\|p^*_{DS}\|^2 \leq \frac{1}{\gamma}\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right).$  (12)
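As an aside, the scheme (FGM) and the guarantee (8) can be exercised on a toy strongly convex quadratic of our choosing, $\theta(p) = \frac{1}{2}\sum_i d_i p_i^2 - \sum_i b_i p_i$, whose gradient has Lipschitz constant $L = \max_i d_i$ and which is $\gamma = \min_i d_i$ strongly convex (names and values here are ours, a minimal sketch):

```python
import math

d = [0.5, 2.0, 10.0]
b = [1.0, -3.0, 2.0]
L, gamma = max(d), min(d)

theta = lambda p: sum(0.5 * di * pi * pi - bi * pi for di, bi, pi in zip(d, b, p))
grad = lambda p: [di * pi - bi for di, bi, pi in zip(d, b, p)]

p_star = [bi / di for di, bi in zip(d, b)]   # exact minimizer
theta_star = theta(p_star)

# (FGM): gradient step on w_k, then momentum extrapolation.
w = p = [0.0, 0.0, 0.0]
beta = (math.sqrt(L) - math.sqrt(gamma)) / (math.sqrt(L) + math.sqrt(gamma))
for _ in range(60):
    g = grad(w)
    p_next = [wi - gi / L for wi, gi in zip(w, g)]
    w = [pn + beta * (pn - pi) for pn, pi in zip(p_next, p)]
    p = p_next

# Linear convergence as in (8): the gap shrinks at least like exp(-k*sqrt(gamma/L)).
gap = theta(p) - theta_star
bound = (theta([0.0, 0.0, 0.0]) - theta_star
         + 0.5 * gamma * sum(pi * pi for pi in p_star)) * math.exp(-60 * math.sqrt(gamma / L))
print(gap <= bound)
```

Sixty iterations already push the objective gap below the theoretical envelope, illustrating the linear (geometric) rate that strong convexity buys over the $O(1/k^2)$ rate of the non-strongly-convex fast gradient method.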

In order to estimate the function values, we notice that formula (9) states

$\theta_\rho(p_k) - \theta_\rho(p^*_{DS}) \leq \left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \frac{\gamma}{2}\left(\|p^*_{DS}\|^2 - \|p_k\|^2\right) \quad \forall k \geq 0.$

The last term in the inequality above can be estimated via

$\|p^*_{DS}\|^2 - \|p_k\|^2 \leq \|p_k - p^*_{DS}\|\left(2\|p^*_{DS}\| + \|p_k - p^*_{DS}\|\right) \overset{(11),(12)}{\leq} \frac{2\sqrt{2}}{\gamma}\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \frac{2}{\gamma}\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}}.$

Thus we obtain for all $k \geq 0$

$\theta_\rho(p_k) - \theta_\rho(p^*_{DS}) \leq \left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right)\left(2 e^{-k\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \sqrt{2}\, e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}}\right) \leq \left(2 + \sqrt{2}\right)\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}}.$  (13)

Further, we have $\theta_\rho(0) \overset{(5)}{\leq} \theta(0)$ and $\theta_\rho(p^*_{DS}) \overset{(5)}{\geq} \theta(p^*_{DS}) - \rho D_f \geq \theta(p^*) - \rho D_f$ and, from here,

$\theta_\rho(0) - \theta_\rho(p^*_{DS}) \leq \theta(0) - \theta(p^*) + \rho D_f.$  (14)

Hence, using (5),

$\theta_\rho(p^*_{DS}) \leq \theta_\rho(p^*_{DS}) + \frac{\gamma}{2}\|p^*_{DS}\|^2 = \theta_{\rho,\gamma}(p^*_{DS}) \leq \theta_{\rho,\gamma}(p^*) = \theta_\rho(p^*) + \frac{\gamma}{2}\|p^*\|^2 \leq \theta(p^*) + \frac{\gamma}{2}\|p^*\|^2,$

and from here it follows for all $k \geq 0$

$\theta(p_k) - \theta(p^*) \leq \rho D_f + \frac{\gamma}{2}\|p^*\|^2 + \theta_\rho(p_k) - \theta_\rho(p^*_{DS}) \overset{(13),(7)}{\leq} \rho D_f + \frac{\gamma}{2} R^2 + \left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*) + \rho D_f\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}}.$  (15)

Next we fix $\varepsilon > 0$. In order to get $\theta(p_k) - \theta(p^*) \leq \varepsilon$ after a certain number of iterations $k$, we force all three terms in (15) to be less than or equal to $\frac{\varepsilon}{3}$. To this end we choose first

$\rho := \rho(\varepsilon) = \frac{\varepsilon}{3 D_f} \quad \text{and} \quad \gamma := \gamma(\varepsilon) = \frac{2\varepsilon}{3 R^2}.$  (16)

With these new parameters we can simplify (15) to

$\theta(p_k) - \theta(p^*) \leq \frac{2\varepsilon}{3} + \left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0,$

thus, the second term in the expression on the right-hand side of the above estimate determines the number of iterations needed to obtain $\varepsilon$-accuracy for the dual objective function $\theta$. Indeed, we have

$\left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \leq \frac{\varepsilon}{3} \iff k \geq 2\sqrt{\frac{L(\rho,\gamma)}{\gamma}} \ln\left(\frac{3\left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}\right)}{\varepsilon}\right).$  (17)

Noticing that

$\sqrt{\frac{L(\rho,\gamma)}{\gamma}} = \sqrt{\frac{9\|A\|^2 D_f R^2}{2\varepsilon^2} + \frac{3R^2}{2\mu\varepsilon} + 1},$

in order to obtain an approximately optimal solution to $(D)$ we need $k = O\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right)$ iterations.

4.3 Convergence of $\|\nabla\theta_\rho(p_k)\|$ to 0

Guaranteeing $\varepsilon$-optimality for the objective value of the dual is not sufficient for solving the primal optimization problem with a good convergence rate, as we need at least the same convergence rate for the decrease of $\|\nabla\theta_\rho(p_k)\| = \|A x_{f,p_k} - x_{g,p_k}\|$ to $0$ in order to ensure primal feasibility. Within this section we show that this is actually the case (see also [10]). It holds

$\|\nabla\theta_\rho(p_k)\| = \|\nabla\theta_{\rho,\gamma}(p_k) - \gamma p_k\| \leq \|\nabla\theta_{\rho,\gamma}(p_k)\| + \gamma\|p_k\| \quad \forall k \geq 0.$

The first term on the right-hand side above can be estimated using (10), namely

$\|\nabla\theta_{\rho,\gamma}(p_k)\| \leq \sqrt{2 L(\rho,\gamma)\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right)}\, e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} \quad \forall k \geq 0,$

while for the second term we use

$\|p_k\| = \|p_k - p^*_{DS} + p^*_{DS}\| \leq \|p_k - p^*_{DS}\| + \|p^*_{DS}\| \overset{(11)}{\leq} \sqrt{\frac{2}{\gamma}\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right)}\, e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \|p^*_{DS}\| \quad \forall k \geq 0.$  (18)

Moreover, we notice that

$\theta(p^*) + \frac{\gamma}{2}\|p^*\|^2 \geq \theta_{\rho,\gamma}(p^*) \geq \theta_{\rho,\gamma}(p^*_{DS}) \overset{(5)}{\geq} \theta(p^*_{DS}) - \rho D_f + \frac{\gamma}{2}\|p^*_{DS}\|^2 \geq \theta(p^*) - \rho D_f + \frac{\gamma}{2}\|p^*_{DS}\|^2,$

which implies that $\|p^*_{DS}\|^2 \leq \|p^*\|^2 + \frac{2\rho}{\gamma} D_f$. Hence,

$\|p^*_{DS}\|^2 \leq \|p^*\|^2 + \frac{2\rho}{\gamma} D_f \overset{(16)}{=} \|p^*\|^2 + \frac{2\varepsilon}{3\gamma} \overset{(16)}{=} \|p^*\|^2 + R^2 \overset{(7)}{\leq} 2R^2,$  (19)

which, combined with the previous estimates, (14) and (16), provides for all $k \geq 0$

$\|\nabla\theta_\rho(p_k)\| \leq \left(\sqrt{2 L(\rho,\gamma)} + \sqrt{2\gamma}\right)\sqrt{\theta_\rho(0) - \theta_\rho(p^*_{DS})}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \sqrt{2}\,\gamma R \overset{(14),(16)}{\leq} \left(\sqrt{2 L(\rho,\gamma)} + \sqrt{2\gamma}\right)\sqrt{\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \frac{2\sqrt{2}\,\varepsilon}{3R}.$  (20)

For $\varepsilon > 0$ fixed, the first term in (20) decreases as the iteration counter $k$ grows and, in order to ensure $\|\nabla\theta_\rho(p_k)\| \leq \frac{\varepsilon}{R}$, we need

$k \geq 2\sqrt{\frac{L(\rho,\gamma)}{\gamma}} \ln\left(\frac{3R\left(\sqrt{2L(\rho,\gamma)} + \sqrt{2\gamma}\right)\sqrt{\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}}}{\left(3 - 2\sqrt{2}\right)\varepsilon}\right)$  (21)

iteration steps. Summarizing, by taking into account (16), we can ensure

$\theta(p_k) - \theta(p^*) \leq \varepsilon \quad \text{and} \quad \|\nabla\theta_\rho(p_k)\| \leq \frac{\varepsilon}{R} \quad \text{in } k = O\left(\frac{1}{\varepsilon}\ln\frac{1}{\varepsilon}\right) \text{ iterations.}$  (22)

4.4 Constructing an approximate primal solution

Since our main focus is to solve the primal optimization problem $(P)$, we prove in the following that the sequences $(x_{f,p_k})_{k\geq 0} \subseteq \operatorname{dom} f$ and $(x_{g,p_k})_{k\geq 0} \subseteq \operatorname{dom} g$ constructed in Subsection 3.1 contain all the information one needs to recover approximately optimal solutions to $(P)$ (see [8, 10] for a similar approach). Let $k := k(\varepsilon)$ be the smallest index satisfying (17) and (21), thus guaranteeing (22). Since

$\theta_\rho(p_k) - \theta(p^*) \overset{(5)}{\leq} \theta(p_k) - \theta(p^*) \overset{(22)}{\leq} \varepsilon$

and

$\theta_\rho(p_k) - \theta(p^*) \overset{(5)}{\geq} \theta(p_k) - \rho D_f - \theta(p^*) \overset{(16)}{=} \theta(p_k) - \theta(p^*) - \frac{\varepsilon}{3} \geq -\frac{\varepsilon}{3},$

it holds $|\theta_\rho(p_k) - \theta(p^*)| \leq \varepsilon$ for all $k \geq k(\varepsilon)$. Further, we have

$\theta_\rho(p_k) = f^*_\rho(A^* p_k) + g^*(-p_k) = \langle p_k, A x_{f,p_k}\rangle - f(x_{f,p_k}) - \frac{\rho}{2}\|x_{f,p_k}\|^2 - \langle p_k, x_{g,p_k}\rangle - g(x_{g,p_k})$

and from here (notice that $v(D) = -\theta(p^*)$)

$f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) = \langle p_k, \nabla\theta_\rho(p_k)\rangle + \left(\theta(p^*) - \theta_\rho(p_k)\right) - \frac{\rho}{2}\|x_{f,p_k}\|^2.$

It follows

$f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) \leq \|p_k\|\,\|\nabla\theta_\rho(p_k)\| + |\theta(p^*) - \theta_\rho(p_k)| + \frac{\rho}{2}\|x_{f,p_k}\|^2 \leq \|p_k\|\,\|\nabla\theta_\rho(p_k)\| + \varepsilon + \rho D_f \overset{(16)}{=} \|p_k\|\,\|\nabla\theta_\rho(p_k)\| + \frac{4\varepsilon}{3} \overset{(22)}{\leq} \frac{\varepsilon}{R}\|p_k\| + \frac{4\varepsilon}{3} \quad \forall k \geq k(\varepsilon).$
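The parameter choices (16) and the iteration bound (17) are easy to turn into a small calculator. The sketch below assumes hypothetical problem constants (including a known value for $\theta(0) - \theta(p^*)$, here `theta0_gap`); all names are ours:

```python
import math

def ds_parameters(eps, D_f, R, normA, mu, theta0_gap):
    # Smoothing parameters from (16) and the iteration bound from (17).
    # theta0_gap stands for theta(0) - theta(p*), assumed known here.
    rho = eps / (3.0 * D_f)
    gamma = 2.0 * eps / (3.0 * R ** 2)
    L = normA ** 2 / rho + 1.0 / mu + gamma
    k = 2.0 * math.sqrt(L / gamma) * math.log(
        3.0 * (2.0 + math.sqrt(2.0)) * (theta0_gap + eps / 3.0) / eps)
    return rho, gamma, L, math.ceil(k)

# Halving eps roughly doubles the iteration count: the O((1/eps) ln(1/eps)) regime.
_, _, _, k1 = ds_parameters(1e-2, D_f=0.5, R=2.0, normA=1.0, mu=1.0, theta0_gap=1.0)
_, _, _, k2 = ds_parameters(5e-3, D_f=0.5, R=2.0, normA=1.0, mu=1.0, theta0_gap=1.0)
print(k1, k2)
```

Since $\sqrt{L(\rho,\gamma)/\gamma}$ scales like $1/\varepsilon$ while the logarithm grows only slowly, the printed counts grow slightly faster than linearly as $\varepsilon$ is halved.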

In the light of (18) and (19), it holds

$\|p_k\| \leq \sqrt{\frac{2}{\gamma}\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}\right)}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \sqrt{2} R \overset{(16)}{=} R\sqrt{\frac{3}{\varepsilon}\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}\right)}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \sqrt{2} R.$

Finally, we obtain

$f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) \leq \sqrt{3\varepsilon\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{3}\right)}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\rho,\gamma)}}} + \left(\sqrt{2} + \frac{4}{3}\right)\varepsilon,$

which, due to the choice of $k = k(\varepsilon)$, fulfills

$f(x_{f,p_k}) + g(x_{g,p_k}) - v(D) \leq 5\varepsilon.$  (23)

By taking into account weak duality, i.e. $v(D) \leq v(P)$, we conclude that $x_{f,p_k} \in \operatorname{dom} f$ and $x_{g,p_k} \in \operatorname{dom} g$ can be seen as approximately optimal solutions to $(P)$.

4.5 Existence of an optimal solution

We close this section with a convergence analysis of the two sequences of primal approximate optimal solutions when $\varepsilon$ tends to zero. To this end let $(\varepsilon_n)_{n\geq 0} \subseteq \mathbb{R}_+$ be a decreasing sequence of positive scalars with $\lim_{n\to\infty} \varepsilon_n = 0$. For each $n \geq 0$, the double smoothing algorithm (FGM) with smoothing parameters $\rho_n$ and $\gamma_n$ given by (16) requires at least $k = k(\varepsilon_n)$ iterations to fulfill (17) and (21). For $n \geq 0$ we denote $x_n := x_{f,p_{k(\varepsilon_n)}} \in \operatorname{dom} f$ and $y_n := x_{g,p_{k(\varepsilon_n)}} \in \operatorname{dom} g$.

Due to the boundedness of $\operatorname{dom} f$, its closure $\operatorname{cl}(\operatorname{dom} f)$ is weakly compact (see [1, Theorem 3.3]), hence there exist a subsequence $(x_{n_l})_{l\geq 0}$ and $\bar{x} \in H$ such that $x_{n_l}$ converges weakly to $\bar{x} \in \operatorname{cl}(\operatorname{dom} f)$ as $l \to +\infty$. Since $A : H \to \mathbb{R}^m$ is linear and continuous, the sequence $(A x_{n_l})_{l\geq 0}$ converges to $A\bar{x}$ as $l \to +\infty$. In view of relation (22) we get

$\|A x_{n_l} - y_{n_l}\| \leq \frac{\varepsilon_{n_l}}{R} \to 0 \quad (l \to +\infty).$  (24)

This means that the sequence $(y_{n_l})_{l\geq 0} \subseteq \operatorname{dom} g$ is bounded, hence there exist a subsequence of it (still denoted by $(y_{n_l})_{l\geq 0}$) and an element $\bar{y} \in \operatorname{cl}(\operatorname{dom} g)$ such that $y_{n_l} \to \bar{y}$ as $l \to +\infty$. Taking $l \to +\infty$ in (24), it follows $A\bar{x} = \bar{y}$. Furthermore, due to (23), we have

$f(x_{n_l}) + g(y_{n_l}) \leq v(D) + 5\varepsilon_{n_l} \quad \forall l \geq 0$

and, by using the lower semicontinuity of $f$ and $g$ and [1, Theorem 9.1], we obtain

$f(\bar{x}) + g(A\bar{x}) \leq \liminf_{l\to\infty} \left\{f(x_{n_l}) + g(y_{n_l})\right\} \leq \lim_{l\to\infty} \left\{v(D) + 5\varepsilon_{n_l}\right\} = v(D) \leq v(P).$

Since $v(P) \in \mathbb{R}$, we have $\bar{x} \in \operatorname{dom} f$ and $A\bar{x} \in \operatorname{dom} g$, which yields that $\bar{x}$ is an optimal solution to $(P)$.
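The whole pipeline of Sections 3 and 4 can be sketched end to end on a small instance of our own: $H = \mathbb{R}^2$, $A = \operatorname{id}$, $f = \delta_{[-1,1]^2}$ (bounded domain, so $x_{f,p}$ is a clipped scaling) and $g(y) = \frac{1}{2}\|y - b\|^2$ ($\mu = 1$, $x_{g,p} = b - p$); the primal solution is the projection of $b$ onto the box. For simplicity, fixed small values of $\rho$ and $\gamma$ replace the $\varepsilon$-driven choices (16). This is a toy illustration, not the paper's image-deblurring experiment:

```python
import math

b = [2.0, 0.5]                 # g(y) = 0.5*||y - b||^2, mu = 1
rho, gamma = 1e-3, 1e-3        # smoothing parameters (fixed here for simplicity)
L = 1.0 / rho + 1.0 + gamma    # L(rho,gamma) = ||A||^2/rho + 1/mu + gamma, A = id

def grad(p):
    # grad theta_{rho,gamma}(p) = A x_{f,p} - x_{g,p} + gamma*p with
    # x_{f,p} = clip(p/rho) (maximizer for the box indicator) and x_{g,p} = b - p.
    xf = [max(-1.0, min(1.0, pi / rho)) for pi in p]
    xg = [bi - pi for bi, pi in zip(b, p)]
    return [xfi - xgi + gamma * pi for xfi, xgi, pi in zip(xf, xg, p)]

# (FGM) from Subsection 4.1
w = p = [0.0, 0.0]
kappa = (math.sqrt(L) - math.sqrt(gamma)) / (math.sqrt(L) + math.sqrt(gamma))
for _ in range(20000):
    g = grad(w)
    p_next = [wi - gi / L for wi, gi in zip(w, g)]
    w = [pn + kappa * (pn - pi) for pn, pi in zip(p_next, p)]
    p = p_next

x = [max(-1.0, min(1.0, pi / rho)) for pi in p]   # recovered primal point x_{f,p_k}
primal_val = 0.5 * sum((xi - bi) ** 2 for xi, bi in zip(x, b))  # f = 0 on the box
print([round(xi, 3) for xi in x], round(primal_val, 4))
# x is close to the true solution (1, 0.5), the projection of b onto the box,
# and the primal value is close to v(P) = 0.5.
```

The recovered point is accurate up to the smoothing bias of order $\rho$ and $\gamma$, which is precisely why (16) couples those parameters to the target accuracy $\varepsilon$.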

5 Improving the convergence rates

In this section we investigate how additional assumptions on the functions $f$ and/or $g$ influence the implementation of the double smoothing approach and its rate of convergence, and eventually allow a weakening of the standing assumptions made in the paper. In all three situations addressed here the construction of the approximate primal solutions and the proof of the existence of an optimal solution to the primal problem can be carried out in analogy to Subsections 4.4 and 4.5, respectively. It is worth noticing that the additional assumptions furnish an improvement of the complexity, which is explained by the fact that constants of strong convexity and/or Lipschitz constants of the gradient are then already available, thus they do not need to be constructed in the smoothing process as functions of the level of accuracy $\varepsilon$.

5.1 The case $f$ is strongly convex

In addition to the standing assumptions, we assume first that the function $f : H \to \overline{\mathbb{R}}$ is $\beta$-strongly convex ($\beta > 0$), but remove the boundedness assumption on its domain. In this situation the first smoothing, as done in Subsection 3.1, can be omitted and the fast gradient method (FGM) can be applied to the minimization problem

$\inf_{p \in \mathbb{R}^m} \theta_\gamma(p),$  (25)

where $\theta_\gamma : \mathbb{R}^m \to \mathbb{R}$, $\theta_\gamma(p) := f^*(A^* p) + g^*(-p) + \frac{\gamma}{2}\|p\|^2$, with $\gamma > 0$, is a $\gamma$-strongly convex and differentiable function with Lipschitz continuous gradient. The Lipschitz constant of $\nabla\theta_\gamma$ is $L(\gamma) := \frac{\|A\|^2}{\beta} + \frac{1}{\mu} + \gamma$. This gives rise to a sequence $(p_k)_{k\geq 0}$ satisfying

$\theta_\gamma(p_k) - \theta_\gamma(p^*_{DS}) \overset{(8)}{\leq} \left(\theta_\gamma(0) - \theta_\gamma(p^*_{DS}) + \frac{\gamma}{2}\|p^*_{DS}\|^2\right) e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}}$  (26)
$= \left(\theta(0) - \theta(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0,$  (27)

where $p^*_{DS}$ denotes the unique optimal solution of problem (25). Thus, from (27) it follows

$\|\nabla\theta_\gamma(p_k)\|^2 \leq 2 L(\gamma)\left(\theta(0) - \theta(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}}$  (28)

and

$\|p_k - p^*_{DS}\|^2 \leq \frac{2}{\gamma}\left(\theta_\gamma(p_k) - \theta_\gamma(p^*_{DS})\right) \leq \frac{2}{\gamma}\left(\theta(0) - \theta(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0.$  (29)

Additionally, for all iterations $k \geq 0$ we have

$\|p^*_{DS}\|^2 \leq \frac{1}{\gamma}\left(\theta(0) - \theta(p^*_{DS})\right)$  (30)

and

$\|p^*_{DS}\|^2 - \|p_k\|^2 \leq \|p_k - p^*_{DS}\|\left(2\|p^*_{DS}\| + \|p_k - p^*_{DS}\|\right) \overset{(29),(30)}{\leq} \left(\frac{2\sqrt{2}}{\gamma} e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}} + \frac{2}{\gamma} e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}}\right)\left(\theta(0) - \theta(p^*_{DS})\right),$

thus

$\theta(p_k) - \theta(p^*_{DS}) \overset{(27)}{\leq} \left(\theta(0) - \theta(p^*_{DS})\right) e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}} + \frac{\gamma}{2}\left(\|p^*_{DS}\|^2 - \|p_k\|^2\right) \leq \left(\theta(0) - \theta(p^*_{DS})\right)\left(2 e^{-k\sqrt{\frac{\gamma}{L(\gamma)}}} + \sqrt{2}\, e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}}\right) \leq \left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*_{DS})\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0.$

We denote by $p^* \in \mathbb{R}^m$ an optimal solution to the dual optimization problem $(D)$ and assume that the upper bound $\|p^*\| \leq R$ is available for some nonzero $R \in \mathbb{R}_+$. Thus, since $\theta(p^*_{DS}) \leq \theta_\gamma(p^*_{DS}) \leq \theta_\gamma(p^*) = \theta(p^*) + \frac{\gamma}{2}\|p^*\|^2$, we obtain for all $k \geq 0$

$\theta(p_k) - \theta(p^*) \leq \frac{\gamma}{2}\|p^*\|^2 + \theta(p_k) - \theta(p^*_{DS}) \leq \frac{\gamma}{2} R^2 + \left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*)\right) e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}}.$

Hence, for $\varepsilon > 0$, in order to guarantee $\varepsilon$-accuracy for the dual objective function we can force both terms in the above estimate to be less than or equal to $\frac{\varepsilon}{2}$. Thus, by taking

$\gamma := \gamma(\varepsilon) = \frac{\varepsilon}{R^2},$

this time we will need, in contrast to (17),

$k \geq 2\sqrt{\frac{L(\gamma)}{\gamma}} \ln\left(\frac{2\left(2+\sqrt{2}\right)\left(\theta(0) - \theta(p^*)\right)}{\varepsilon}\right),$

i.e. $k = O\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ iterations. Further, using (28) we have

$\|\nabla\theta_\gamma(p_k)\| \leq \sqrt{2 L(\gamma)\left(\theta(0) - \theta(p^*)\right)}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}} \quad \forall k \geq 0.$

On the other hand, using

$\|p_k\| \leq \|p_k - p^*_{DS}\| + \|p^*_{DS}\| \overset{(29)}{\leq} \sqrt{\frac{2}{\gamma}\left(\theta(0) - \theta(p^*)\right)}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}} + \|p^*_{DS}\|$

and the relation $\theta(p^*) + \frac{\gamma}{2}\|p^*_{DS}\|^2 \leq \theta_\gamma(p^*_{DS}) \leq \theta_\gamma(p^*) = \theta(p^*) + \frac{\gamma}{2}\|p^*\|^2$, which yields $\|p^*_{DS}\| \leq \|p^*\| \leq R$, we obtain

$\|\nabla\theta(p_k)\| \leq \|\nabla\theta_\gamma(p_k)\| + \gamma\|p_k\| \leq \left(\sqrt{2L(\gamma)} + \sqrt{2\gamma}\right)\sqrt{\theta(0) - \theta(p^*)}\; e^{-\frac{k}{2}\sqrt{\frac{\gamma}{L(\gamma)}}} + \gamma R \quad \forall k \geq 0.$

Therefore, in order to guarantee that $\|A x_{f,p_k} - x_{g,p_k}\| = \|\nabla\theta(p_k)\|$ is of order $\frac{\varepsilon}{R}$, we need $k = O\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ iterations, which coincides with the convergence rate for the dual objective values.
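The situation of this subsection can also be sketched numerically: with $f$ strongly convex, the first smoothing is skipped and (FGM) runs directly on $\theta_\gamma$. The toy instance below (all names and values ours) takes $A = \operatorname{id}$, $f(x) = \frac{\beta}{2}\|x\|^2$ and $g(y) = \frac{1}{2}\|y - b\|^2$, whose exact primal minimizer is $b/(\beta+1)$:

```python
import math

beta, mu, gamma = 2.0, 1.0, 1e-3
b = [1.0, 2.0]
L = 1.0 / beta + 1.0 / mu + gamma   # L(gamma) = ||A||^2/beta + 1/mu + gamma

def grad(p):
    # grad theta_gamma(p) = A x_{f,p} - x_{g,p} + gamma*p,
    # with x_{f,p} = grad f*(p) = p/beta and x_{g,p} = b - p.
    return [pi / beta - (bi - pi) + gamma * pi for pi, bi in zip(p, b)]

w = p = [0.0, 0.0]
kappa = (math.sqrt(L) - math.sqrt(gamma)) / (math.sqrt(L) + math.sqrt(gamma))
for _ in range(2000):
    p_next = [wi - gi / L for wi, gi in zip(w, grad(w))]
    w = [pn + kappa * (pn - pi) for pn, pi in zip(p_next, p)]
    p = p_next

x = [pi / beta for pi in p]          # recovered primal point x_{f,p_k}
print([round(xi, 4) for xi in x])    # approximately (1/3, 2/3) = b/(beta + 1),
                                     # up to the gamma-regularization bias
```

Because the Lipschitz constant $L(\gamma)$ no longer blows up like $1/\rho = O(1/\varepsilon)$, far fewer iterations are needed here than in the doubly smoothed case for the same accuracy, which is the source of the improved $O\left(\frac{1}{\sqrt{\varepsilon}}\ln\frac{1}{\varepsilon}\right)$ rate.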

5.2 The case $g$ is everywhere differentiable with Lipschitz continuous gradient

Throughout this subsection, additionally to the standing assumptions, we assume that $g : \mathbb{R}^m \rightarrow \mathbb{R}$ has full domain and is differentiable with a $\frac{1}{\beta}$-Lipschitz continuous gradient, for $\beta > 0$. In this situation the second smoothing, as done in Subsection 3.2, can be omitted and the fast gradient method (FGM) can be applied to the minimization problem

$$\inf_{p \in \mathbb{R}^m} \theta_\rho(p), \qquad (31)$$

where $\theta_\rho : \mathbb{R}^m \rightarrow \mathbb{R}$, $\theta_\rho(p) := f^*_\rho(A^*p) + g^*(-p)$, is $\beta$-strongly convex due to [1, Theorem 18.15] and differentiable with Lipschitz continuous gradient. The Lipschitz constant of $\nabla\theta_\rho$ is $L(\rho) := \frac{\|A\|^2}{\rho} + \frac{1}{\mu}$. This gives rise to a sequence $(p_k)_{k \geq 0}$ satisfying

$$\theta_\rho(p_k) - \theta_\rho(p^*_{DS}) \leq \left(\theta_\rho(0) - \theta_\rho(p^*_{DS}) + \frac{\beta}{2}\|p^*_{DS}\|^2\right) e^{-k\sqrt{\beta/L(\rho)}} \qquad (32)$$
$$\leq 2\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\beta/L(\rho)}} \qquad (33)$$

and

$$\|\nabla\theta_\rho(p_k)\|^2 \leq 4L(\rho)\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right) e^{-k\sqrt{\beta/L(\rho)}} \quad \forall k \geq 0, \qquad (34)$$

where $p^*_{DS}$ denotes the unique optimal solution of the problem (31). We denote by $p^* \in \mathbb{R}^m$ the unique optimal solution of the dual optimization problem (D) and would like to notice that in this context it is not necessary to know an upper bound for the norm of the dual optimal solution. Since $\theta_\rho(0) \overset{(5)}{\leq} \theta(0)$ and $\theta_\rho(p^*_{DS}) \overset{(5)}{\geq} \theta(p^*_{DS}) - \rho D_f \geq \theta(p^*) - \rho D_f$, we obtain

$$\theta_\rho(0) - \theta_\rho(p^*_{DS}) \leq \theta(0) - \theta(p^*) + \rho D_f. \qquad (35)$$

On the other hand, since $\theta(p_k) \overset{(5)}{\leq} \theta_\rho(p_k) + \rho D_f$ and $\theta_\rho(p^*_{DS}) \leq \theta_\rho(p^*) \leq \theta(p^*)$, it follows

$$\theta(p_k) - \theta(p^*) \leq \rho D_f + \theta_\rho(p_k) - \theta_\rho(p^*_{DS}) \leq \rho D_f + 2\left(\theta(0) - \theta(p^*) + \rho D_f\right) e^{-k\sqrt{\beta/L(\rho)}} \quad \forall k \geq 0.$$

Hence, for $\varepsilon > 0$, in order to guarantee $\varepsilon$-optimality for the dual objective, we force both terms in the above estimate to be less than or equal to $\frac{\varepsilon}{2}$. By taking, in contrast to (17),

$$\rho := \rho(\varepsilon) = \frac{\varepsilon}{2D_f}, \qquad (36)$$

we need

$$k \geq \sqrt{\frac{L(\rho)}{\beta}}\,\ln\left(\frac{4\left(\theta(0) - \theta(p^*)\right) + 2\varepsilon}{\varepsilon}\right),$$

i.e. $k = O\left(\frac{1}{\sqrt{\varepsilon}}\ln\left(\frac{1}{\varepsilon}\right)\right)$ iterations to obtain $\varepsilon$-accuracy for the dual objective values. From (34) we obtain as well

$$\|\nabla\theta_\rho(p_k)\| \leq 2\sqrt{L(\rho)\left(\theta_\rho(0) - \theta_\rho(p^*_{DS})\right)}\, e^{-\frac{k}{2}\sqrt{\beta/L(\rho)}} \overset{(35)}{\leq} 2\sqrt{L(\rho)\left(\theta(0) - \theta(p^*) + \rho D_f\right)}\, e^{-\frac{k}{2}\sqrt{\beta/L(\rho)}}$$
$$\overset{(36)}{=} 2\sqrt{L(\rho)\left(\theta(0) - \theta(p^*) + \frac{\varepsilon}{2}\right)}\, e^{-\frac{k}{2}\sqrt{\beta/L(\rho)}} \quad \forall k \geq 0.$$

Therefore, in order to guarantee $\|Ax_{f,p_k} - x_{g,p_k}\| = \|\nabla\theta_\rho(p_k)\| \leq \varepsilon$, we need $k = O\left(\frac{1}{\sqrt{\varepsilon}}\ln\left(\frac{1}{\varepsilon}\right)\right)$ iterations, which is the same convergence rate as for the dual objective values.

5.3 The case $f$ is strongly convex and $g$ is everywhere differentiable with Lipschitz continuous gradient

The third favorable situation which we address is when, additionally to the standing assumptions, the function $f : \mathcal{H} \rightarrow \overline{\mathbb{R}}$ is $\gamma$-strongly convex ($\gamma > 0$), however without assuming anymore that $\operatorname{dom} f$ is bounded, and the function $g : \mathbb{R}^m \rightarrow \mathbb{R}$ has full domain and is differentiable with a $\frac{1}{\beta}$-Lipschitz continuous gradient ($\beta > 0$). In this case both the first and the second smoothing can be omitted and the fast gradient method (FGM) can be applied to the minimization problem

$$\inf_{p \in \mathbb{R}^m} \theta(p), \qquad (37)$$

where $\theta : \mathbb{R}^m \rightarrow \mathbb{R}$, $\theta(p) := f^*(A^*p) + g^*(-p)$, is a $\beta$-strongly convex and differentiable function with Lipschitz continuous gradient. The Lipschitz constant of $\nabla\theta$ is $L := \frac{\|A\|^2}{\gamma} + \frac{1}{\mu}$. We denote by $p^* \in \mathbb{R}^m$ the unique optimal solution of (D), for which it is not necessary to know an upper bound of its norm. This gives rise to a sequence $(p_k)_{k \geq 0}$ satisfying

$$\theta(p_k) - \theta(p^*) \overset{(8)}{\leq} \left(\theta(0) - \theta(p^*) + \frac{\beta}{2}\|p^*\|^2\right) e^{-k\sqrt{\beta/L}} \leq 2\left(\theta(0) - \theta(p^*)\right) e^{-k\sqrt{\beta/L}}$$

and

$$\|\nabla\theta(p_k)\|^2 \leq 4L\left(\theta(0) - \theta(p^*)\right) e^{-k\sqrt{\beta/L}} \quad \forall k \geq 0.$$

From here, for $\varepsilon > 0$, we have

$$2\left(\theta(0) - \theta(p^*)\right) e^{-k\sqrt{\beta/L}} \leq \varepsilon \iff k \geq \sqrt{\frac{L}{\beta}}\,\ln\left(\frac{2\left(\theta(0) - \theta(p^*)\right)}{\varepsilon}\right),$$

while

$$2\sqrt{L\left(\theta(0) - \theta(p^*)\right)}\, e^{-\frac{k}{2}\sqrt{\beta/L}} \leq \varepsilon \iff k \geq 2\sqrt{\frac{L}{\beta}}\,\ln\left(\frac{2\sqrt{L\left(\theta(0) - \theta(p^*)\right)}}{\varepsilon}\right).$$

In conclusion, in order to guarantee $\varepsilon$-accuracy for the dual objective values and for the decrease of $\|\nabla\theta(\cdot)\|$ to 0, we need $O\left(\ln\left(\frac{1}{\varepsilon}\right)\right)$ iterations.
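To make the mechanics of Subsection 5.3 concrete, the following Python illustration builds a hypothetical toy instance (not one of the paper's experiments) in which both conjugates are available in closed form: for $f = \frac{\gamma}{2}\|\cdot\|^2$ and $g = \|\cdot - b\|^2$ one has $f^*(z) = \frac{\|z\|^2}{2\gamma}$ and $g^*(q) = \frac{\|q\|^2}{4} + \langle q, b\rangle$, and for this particular instance $Ax_{f,p} - x_{g,p} = \nabla\theta(p)$ holds exactly, so driving the dual gradient to zero recovers primal feasibility. Plain gradient descent replaces FGM for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, gamma = 20, 30, 2.0
A = rng.standard_normal((m, n)) / np.sqrt(n)
b = rng.standard_normal(m)

# theta(p) = f*(A'p) + g*(-p) = ||A'p||^2/(2 gamma) + ||p||^2/4 - <p, b>
grad_theta = lambda p: A @ (A.T @ p) / gamma + p / 2.0 - b
L = np.linalg.norm(A, 2) ** 2 / gamma + 0.5   # Lipschitz constant of grad theta

p = np.zeros(m)
for _ in range(500):
    p -= grad_theta(p) / L                    # gradient descent, step size 1/L

x_f = A.T @ p / gamma    # maximizer defining f*(A'p), i.e. x_{f,p}
x_g = b - p / 2.0        # maximizer defining g*(-p), i.e. x_{g,p}
print(np.linalg.norm(A @ x_f - x_g))          # equals ||grad theta(p)||, near 0
```

Here $\theta$ is strongly convex (modulus at least $\tfrac12$ from the $\frac{\|p\|^2}{4}$ term) and smooth, so even plain gradient descent contracts linearly, matching the $O\big(\ln\tfrac{1}{\varepsilon}\big)$ rate of this section.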

6 Two examples in image processing

In this section we solve a linear inverse problem which arises in the field of signal and image processing via the double smoothing algorithm developed in this paper. For a given matrix $A \in \mathbb{R}^{n \times n}$ describing a blur operator and a given vector $b \in \mathbb{R}^n$ representing the blurred and noisy image, the task is to estimate the unknown original image $x \in \mathbb{R}^n$ fulfilling $Ax = b$. To this end we make use of two regularization functionals with different properties.

6.1 An $\ell_1$ regularization problem

We start by solving the $\ell_1$ regularized convex optimization problem

$$(P) \quad \inf_{x \in S}\left\{\|Ax - b\|^2 + \lambda\|x\|_1\right\},$$

where $S \subseteq \mathbb{R}^n$ is an $n$-dimensional cube representing the range of the pixels and $\lambda > 0$ the regularization parameter. The problem to be solved can be equivalently written as

$$(P) \quad \inf_{x \in \mathbb{R}^n}\left\{f(x) + g(Ax)\right\},$$

for $f : \mathbb{R}^n \rightarrow \overline{\mathbb{R}}$, $f(x) = \lambda\|x\|_1 + \delta_S(x)$, and $g : \mathbb{R}^n \rightarrow \mathbb{R}$, $g(y) = \|y - b\|^2$. Thus $f$ is proper, convex and lower semicontinuous with bounded domain and $g$ is a 2-strongly convex function with full domain, differentiable everywhere and with Lipschitz continuous gradient having 2 as Lipschitz constant. This means that we are in the setting of Subsection 5.2. By making use of gradient methods, both the iterative shrinkage-thresholding algorithm (ISTA) (see [9]) and its accelerated variant FISTA (see [2, 3]) solve the optimization problem $(P)$ in $O\left(\frac{1}{\varepsilon}\right)$ and $O\left(\frac{1}{\sqrt{\varepsilon}}\right)$ iterations, respectively, whereas the convergence rate of our method is $O\left(\frac{1}{\sqrt{\varepsilon}}\ln\left(\frac{1}{\varepsilon}\right)\right)$.

Since each pixel furnishes a greyscale value between 0 and 255, a natural choice for the convex set $S$ would be the $n$-dimensional cube $[0, 255]^n \subseteq \mathbb{R}^n$. In order to reduce the Lipschitz constant which appears in the developed approach, we scale the pictures to which we refer within this subsection such that each of their pixels ranges in the interval $\left[0, \frac{1}{10}\right]$. We concretely look at the $256 \times 256$ cameraman test image, which is part of the image processing toolbox in Matlab.
The dimension of the vectorized and scaled cameraman test image is $n = 256^2 = 65536$. By making use of the Matlab functions imfilter and fspecial, this image is blurred as follows:
1 H=fspecial('gaussian',9,4);           % gaussian blur of size 9 times 9
2                                       % and standard deviation 4
3 B=imfilter(X,H,'conv','symmetric');   % B=observed blurred image
4                                       % X=original image
In row 1 the function fspecial returns a rotationally symmetric Gaussian lowpass filter of size $9 \times 9$ with standard deviation 4. The entries of H are nonnegative and their sum

adds up to 1. In row 3 the function imfilter convolves the filter H with the image $X \in \mathbb{R}^{256 \times 256}$ and outputs the blurred image $B \in \mathbb{R}^{256 \times 256}$. The boundary option "symmetric" corresponds to reflexive boundary conditions. Thanks to the rotationally symmetric filter H, the linear operator $A \in \mathbb{R}^{n \times n}$ given by the Matlab function imfilter is symmetric, too. By making use of the real spectral decomposition of $A$, one obtains $\|A\| = 1$. After adding a zero-mean white Gaussian noise with standard deviation $10^{-4}$, we obtain the blurred and noisy image $b \in \mathbb{R}^n$ which is shown in Figure 6.1.

Figure 6.1: The $256 \times 256$ cameraman test image (left: original; right: blurred and noisy)

The dual optimization problem in minimization form is

$$(D) \quad \inf_{p \in \mathbb{R}^n}\left\{f^*(A^*p) + g^*(-p)\right\}$$

and, due to the fact that $g$ has full domain, strong duality for $(P)$ and (D) holds, i.e. $v(P) = v(D)$ and (D) has an optimal solution (see, for instance, [5, 6]). By taking into consideration (36), the smoothing parameter is taken as

$$\rho := \frac{\varepsilon}{2D_f} \qquad (38)$$

for $D_f = \sup\left\{\frac{\|x\|^2}{2} : x \in \left[0, \frac{1}{10}\right]^n\right\} = 327.68$, while the accuracy is chosen to be $\varepsilon = 0.3$ and the regularization parameter is set to $\lambda = 2\mathrm{e}{-6}$. We show next that the sequences of approximate primal solutions $(x_{f,p_k})_{k \geq 0}$ and $(x_{g,p_k})_{k \geq 0}$ can be easily calculated. Indeed, for $k \geq 0$ we have

$$x_{f,p_k} = \operatorname*{arg\,min}_{x \in \left[0, \frac{1}{10}\right]^n}\left\{\lambda\|x\|_1 + \frac{\rho}{2}\|x\|^2 - \langle A^*p_k, x\rangle\right\} = \operatorname*{arg\,min}_{x \in \left[0, \frac{1}{10}\right]^n}\sum_{i=1}^n\left[\lambda x_i + \frac{\rho}{2}x_i^2 - (A^*p_k)_i x_i\right]$$

and, in order to determine it, we need to solve the one-dimensional convex optimization

problem

$$\inf_{x_i \in \left[0, \frac{1}{10}\right]}\left\{\lambda x_i + \frac{\rho}{2}x_i^2 - (A^*p_k)_i x_i\right\},$$

for $i = 1, \ldots, n$, which has as unique optimal solution $\left(x_{f,p_k}\right)_i = \mathcal{P}_{\left[0, \frac{1}{10}\right]}\left(\frac{1}{\rho}\left((A^*p_k)_i - \lambda\right)\right)$. Thus

$$x_{f,p_k} = \mathcal{P}_{\left[0, \frac{1}{10}\right]^n}\left(\frac{1}{\rho}\left(A^*p_k - \lambda 1_n\right)\right).$$

On the other hand, for all $k \geq 0$ we have

$$x_{g,p_k} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n}\left\{\langle p_k, x\rangle + g(x)\right\} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n}\left\{\langle p_k, x\rangle + \|x - b\|^2\right\} = b - \frac{1}{2}p_k.$$

Figure 6.2: Iterations of ISTA, FISTA and double smoothing (DS) for solving $(P)$, with objective values ISTA$_{50}$ = 1.31469e-02, FISTA$_{50}$ = 7.096089e-03, DS$_{50}$ = 8.050151e-03, ISTA$_{100}$ = 9.689755e-03, FISTA$_{100}$ = 6.633611e-03, DS$_{100}$ = 6.75533e-03

Figure 6.2 shows the iterations 50 and 100 of ISTA, FISTA and the double smoothing (DS) approach. The objective function values at iteration $k$ are denoted by ISTA$_k$, FISTA$_k$ and, respectively, DS$_k$ (e.g. DS$_k := f(x_{f,p_k}) + g(Ax_{f,p_k})$). All in all, the visual quality of the restored cameraman image after 100 iterations, when using FISTA or DS, is quite comparable, whereas the image recovered by ISTA is still blurry. However, a valuable tool for measuring the quality of these images is the so-called improvement in

signal-to-noise ratio (ISNR), which is defined as

$$\mathrm{ISNR}(k) = 10\log_{10}\left(\frac{\|x - b\|^2}{\|x - x_k\|^2}\right),$$

where $x$, $b$ and $x_k$ denote the original, the observed and the estimated image at iteration $k$, respectively. Figure 6.3 shows the evolution of the ISNR values when using DS, FISTA and ISTA to solve $(P)$.

Figure 6.3: Improvement in signal-to-noise ratio (ISNR) for DS, FISTA and ISTA over the first 100 iterations

6.2 An $\ell_2$-$\ell_1$ regularization problem

The second convex optimization problem we solve is

$$(P) \quad \inf_{x \in S}\left\{\|Ax - b\|^2 + \lambda\left(\|x\|^2 + \|x\|_1\right)\right\},$$

where $S \subseteq \mathbb{R}^n$ is the $n$-dimensional cube $[0, 1]^n$ representing the pixel range, $\lambda > 0$ the regularization parameter and $\|\cdot\|^2 + \|\cdot\|_1$ the regularization functional, already used in [7]. The problem to be solved can be equivalently written as

$$(P) \quad \inf_{x \in \mathbb{R}^n}\left\{f(x) + g(Ax)\right\},$$

for $f : \mathbb{R}^n \rightarrow \overline{\mathbb{R}}$, $f(x) = \lambda\left(\|x\|^2 + \|x\|_1\right) + \delta_S(x)$, and $g : \mathbb{R}^n \rightarrow \mathbb{R}$, $g(y) = \|y - b\|^2$. Thus $f$ is proper, $2\lambda$-strongly convex and lower semicontinuous with bounded domain and $g$ is a 2-strongly convex function with full domain, differentiable everywhere and with Lipschitz continuous gradient having 2 as Lipschitz constant. This time we are in the setting of Subsection 5.3, the Lipschitz constant of the gradient of $\theta : \mathbb{R}^n \rightarrow \mathbb{R}$, $\theta(p) = f^*(A^*p) + g^*(-p)$, being $L = \frac{1}{2\lambda} + \frac{1}{2}$. By applying the double smoothing approach one obtains a rate of convergence of $O\left(\ln\left(\frac{1}{\varepsilon}\right)\right)$ for solving $(P)$.

In this example we take a look at the blobs test image shown in Figure 6.4, which is also part of the image processing toolbox in Matlab. The picture undergoes the

Figure 6.4: The $272 \times 329$ blobs test image (left: original; right: blurred and noisy)

same blur as described in the previous subsection. Since our pixel range has changed, we now use additive zero-mean white Gaussian noise with standard deviation $10^{-3}$ and the regularization parameter is changed to $\lambda = 2\mathrm{e}{-5}$. We calculate next the sequences of approximate primal solutions $(x_{f,p_k})_{k \geq 0}$ and $(x_{g,p_k})_{k \geq 0}$. Indeed, for $k \geq 0$ we have

$$x_{f,p_k} = \operatorname*{arg\,min}_{x \in [0,1]^n}\left\{\lambda\|x\|^2 + \lambda\|x\|_1 - \langle A^*p_k, x\rangle\right\} = \operatorname*{arg\,min}_{x \in [0,1]^n}\sum_{i=1}^n\left[\lambda x_i^2 + \lambda x_i - (A^*p_k)_i x_i\right] = \mathcal{P}_{[0,1]^n}\left(\frac{1}{2\lambda}\left(A^*p_k - \lambda 1_n\right)\right)$$

and

$$x_{g,p_k} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n}\left\{\langle p_k, x\rangle + g(x)\right\} = \operatorname*{arg\,min}_{x \in \mathbb{R}^n}\left\{\langle p_k, x\rangle + \|x - b\|^2\right\} = b - \frac{1}{2}p_k.$$

Figure 6.5 shows the iterations 50 and 100 of ISTA, FISTA and the double smoothing (DS) technique together with the corresponding objective function values denoted by ISTA$_k$, FISTA$_k$ and DS$_k$. As before, the function values of FISTA are slightly lower than those of DS, while ISTA lags far behind these two methods, not only from a theoretical point of view, but also as can be detected visually. Figure 6.6 displays the improvement in signal-to-noise ratio for ISTA, FISTA and DS and shows that DS outperforms the other two methods from the point of view of the quality of the reconstruction.

7 Conclusions

In this article we investigate the possibilities of accelerating the double smoothing technique when solving unconstrained nondifferentiable convex optimization problems. This method, which assumes the minimization of the doubly regularized Fenchel dual objective, allows in the most general case to reconstruct an approximately optimal primal solution in $O\left(\frac{1}{\varepsilon}\ln\left(\frac{1}{\varepsilon}\right)\right)$ iterations. We show that under some appropriate assumptions on the functions involved in the formulation of the problem to be solved this convergence rate can be improved to $O\left(\frac{1}{\sqrt{\varepsilon}}\ln\left(\frac{1}{\varepsilon}\right)\right)$, or even to $O\left(\ln\left(\frac{1}{\varepsilon}\right)\right)$.

Figure 6.5: Iterations of ISTA, FISTA and double smoothing (DS) for solving $(P)$, with objective values ISTA$_{50}$ = 7.16997e+00, FISTA$_{50}$ = 7.4744e-01, DS$_{50}$ = 8.00883e-01, ISTA$_{100}$ = 3.64951e+00, FISTA$_{100}$ = 6.0411e-01, DS$_{100}$ = 6.33950e-01

References

[1] H.H. Bauschke and P.L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics, Springer, 2011.

[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.

[3] A. Beck and M. Teboulle. Gradient-based algorithms with applications to signal recovery problems. In: Y. Eldar and D. Palomar (eds.), Convex Optimization in Signal Processing and Communications, pp. 33-88. Cambridge University Press, 2010.

[4] J.F. Bonnans and A. Shapiro. Perturbation Analysis of Optimization Problems. Springer Series in Operations Research and Financial Engineering, 2000.

[5] R.I. Boţ. Conjugate Duality in Convex Optimization. Lecture Notes in Economics and Mathematical Systems, Vol. 637, Springer-Verlag Berlin Heidelberg, 2010.

[6] R.I. Boţ, S.-M. Grad and G. Wanka. Duality in Vector Optimization. Springer-Verlag Berlin Heidelberg, 2009.

[7] R.I. Boţ and T. Hein. Iterative regularization with general penalty term: theory and application to $L^1$- and $TV$-regularization. Inverse Problems, 28(10):104010 (19pp), 2012.

Figure 6.6: Improvement in signal-to-noise ratio (ISNR) for DS, FISTA and ISTA over the first 100 iterations

[8] R.I. Boţ and C. Hendrich. A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems. arXiv:1203.2070v1 [math.OC], 2012.

[9] I. Daubechies, M. Defrise and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.

[10] O. Devolder, F. Glineur and Y. Nesterov. Double smoothing technique for large-scale linearly constrained convex optimization. SIAM Journal on Optimization, 22(2):702-727, 2012.

[11] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.

[12] Y. Nesterov. Excessive gap technique in nonsmooth convex optimization. SIAM Journal on Optimization, 16(1):235-249, 2005.

[13] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.

[14] Y. Nesterov. Smoothing technique and its applications in semidefinite optimization. Mathematical Programming, 110(2):245-259, 2005.