Gradient methods for minimizing composite functions


Yu. Nesterov

May 00

Abstract

In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method with convergence rate O(1/k), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest efficient line-search procedures and show that the additional computational work necessary for estimating the unknown problem-class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.

Keywords: local optimization, convex optimization, nonsmooth optimization, complexity theory, black-box model, optimal methods, structural optimization, l1-regularization.

Dedicated to Claude Lemaréchal on the occasion of his 65th birthday.

Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 34 voie du Roman Pays, 1348 Louvain-la-Neuve, Belgium; e-mail: nesterov@core.ucl.ac.be. The author acknowledges the support from Office of Naval Research grant # N00040804: Efficiently Computable Compressed Sensing.

1 Introduction

Motivation. In recent years, several advances in Convex Optimization have been based on the development of different models for optimization problems. Starting from the theory of self-concordant functions [4], it became more and more clear that a proper use of the problem's structure can lead to very efficient optimization methods, which significantly overcome the limitations of the black-box Complexity Theory (see Section 4 in [9] for a discussion). As more recent examples, we can mention the development of the smoothing technique [0], and the special methods for minimizing a convex objective function up to a certain relative accuracy []. In both cases, the proposed optimization schemes strongly exploit the particular structure of the corresponding optimization problem.

In this paper, we develop new optimization methods for approximating global and local minima of composite convex objective functions φ(x). Namely, we assume that

φ(x) = f(x) + Ψ(x), (1.1)

where f(x) is a differentiable function defined by a black-box oracle, and Ψ(x) is a general closed convex function. However, we assume that the function Ψ(x) is simple. This means that we are able to find a closed-form solution for minimizing the sum of Ψ with some simple auxiliary functions. Let us give several examples.

1. Constrained minimization. Let Q be a closed convex set. Define Ψ as the indicator function of the set Q:

Ψ(x) = 0 if x ∈ Q, and Ψ(x) = +∞ otherwise.

Then the unconstrained minimization of the composite function (1.1) is equivalent to minimizing the function f over the set Q. We will see that our assumption on the simplicity of the function Ψ reduces to the ability of finding in closed form the Euclidean projection of an arbitrary point onto the set Q.

2. Barrier representation of the feasible set. Assume that the objective function of the convex constrained minimization problem

find f* = min_{x ∈ Q} f(x)

is given by a black-box oracle, but the feasible set Q is described by a ν-self-concordant barrier F(x) [4].
Define

Ψ(x) = (ε/ν) F(x), φ(x) = f(x) + Ψ(x), and x* = arg min_{x ∈ Q} f(x).

Then, for an arbitrary x̂ ∈ int Q, by the general properties of self-concordant barriers we get

f(x̂) ≤ f(x*) + ⟨φ'(x̂), x̂ − x*⟩ + (ε/ν) ⟨F'(x̂), x* − x̂⟩ ≤ f* + ‖φ'(x̂)‖* · ‖x̂ − x*‖ + ε.

Thus, a point x̂ with a small norm of the gradient of the function φ approximates well the solution of the constrained minimization problem. Note that the objective function φ does not belong to any standard class of convex problems formed by functions with bounded derivatives of a certain degree.

3. Sparse least squares. In many applications, it is necessary to minimize the following objective:

φ(x) = (1/2) ‖Ax − b‖_2^2 + ‖x‖_1 ≝ f(x) + Ψ(x), (1.2)

where A is a matrix of corresponding dimension and ‖·‖_k denotes the standard l_k-norm. The presence of the additive l_1-term very often increases the sparsity of the optimal solution (see [, 8]). This feature was observed a long time ago (see, for example, [, 6, 6, 7]). Recently, this technique became popular in signal processing and statistics [7, 9]. 1)

From the formal point of view, the objective φ(x) in (1.2) is a nonsmooth convex function. Hence, the standard black-box gradient schemes need O(1/ε^2) iterations for generating its ε-solution. The structural methods based on the smoothing technique [0] need O(1/ε) iterations. However, we will see that the same problem can be solved in O(1/ε^{1/2}) iterations of a special gradient-type scheme.

Contents. In Section 2 we study the problem of finding a local minimum of a nonsmooth nonconvex function. First, we show that for general nonsmooth, nonconvex functions, even resolving the question of whether there exists a descent direction from a point is NP-hard. However, for the special form of the objective function (1.1), we can introduce the composite gradient mapping, which makes the above-mentioned problem solvable. The objective function of the auxiliary problem is formed as a sum of the objective of the usual gradient mapping [8] and the general nonsmooth convex term Ψ. For the particular case (1.2), this construction was proposed in [0]. 2) We present different properties of this object, which are important for the complexity analysis of optimization methods. In Section 3 we study the behavior of the simplest gradient scheme based on the composite gradient mapping. We prove that in both the convex and the nonconvex case we have exactly the same complexity results as in the usual smooth situation (Ψ ≡ 0).
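In both the first and the third example, the "simplicity" of Ψ is concrete: minimizing Ψ plus a simple quadratic reduces either to a Euclidean projection or to coordinate-wise soft-thresholding. A minimal sketch of the two operations (the Python function names are ours, not from the paper):

```python
import numpy as np

def prox_box(z, lo, hi):
    # argmin_x { (1/2)*||x - z||^2 : lo <= x <= hi }:
    # the Euclidean projection onto a box is coordinate-wise clipping.
    return np.clip(z, lo, hi)

def prox_l1(z, tau):
    # argmin_x { (1/2)*||x - z||^2 + tau*||x||_1 }:
    # the classical soft-thresholding operation.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)
```

Both operations cost O(n), which is what makes the composite schemes below implementable at the price of an ordinary gradient step.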
For example, for nonconvex f, the maximal negative slope of φ along the minimization sequence with monotonically decreasing function values increases as O(1/k^{1/2}), where k is the iteration counter. Thus, the limiting points have no descent directions (see Theorem 3). If f is convex and has a Lipschitz-continuous gradient, then the Gradient Method converges as O(1/k). It is important that our version of the Gradient Method has an adjustable stepsize strategy, which needs on average one additional computation of the function value per iteration.

In Section 4, we introduce a machinery of estimate sequences and apply it first for justifying the rate of convergence of the dual variant of the gradient method. Afterwards, we present an accelerated version, which converges as O(1/k^2). As compared with the previous variants of accelerated schemes (e.g. [9], [0]), our new scheme can efficiently adjust the initial estimate of the unknown Lipschitz constant. In Section 5 we give examples of applications of the accelerated scheme. We show how to minimize functions with known strong convexity parameter (Section 5.1), how to find a point with a small residual in the system of the first-order optimality conditions (Section 5.2), and how to approximate

1) An interested reader can find a good survey of the literature, existing minimization techniques, and new methods in [3] and [5].
2) However, this idea has a much longer history. To the best of our knowledge, for the general framework this technique was originally developed in [4].

the unknown parameters of strong convexity (Section 5.3). In Section 6 we present the results of preliminary testing of the proposed optimization methods. An earlier version of these results was published in the research report [].

Notation. In what follows, E denotes a finite-dimensional real vector space, and E* the dual space, which is formed by all linear functions on E. The value of a function s ∈ E* at x ∈ E is denoted by ⟨s, x⟩. By fixing a positive definite self-adjoint operator B : E → E*, we can define the following Euclidean norms: 3)

‖h‖ = ⟨Bh, h⟩^{1/2}, h ∈ E,
‖s‖* = ⟨s, B^{-1} s⟩^{1/2}, s ∈ E*. (1.3)

In the particular case of the coordinate vector space E = R^n, we have E = E*. Then, usually, B is taken as a unit matrix, and ⟨s, x⟩ denotes the standard coordinate-wise inner product.

Further, for a function f(x), x ∈ E, we denote by ∇f(x) its gradient at x:

f(x + h) = f(x) + ⟨∇f(x), h⟩ + o(‖h‖), h ∈ E.

Clearly, ∇f(x) ∈ E*. For a convex function Ψ we denote by ∂Ψ(x) its subdifferential at x. Finally, the directional derivative of a function φ is defined in the usual way:

Dφ(y)[u] = lim_{α↓0} (1/α) [φ(y + αu) − φ(y)].

Finally, we use the notation a ≥(#.#) b for indicating that the inequality a ≥ b is an immediate consequence of the displayed relation (#.#).

2 Composite gradient mapping

The problem of finding a descent direction for a nonsmooth nonconvex function (or proving that such a direction does not exist) is one of the most difficult problems in Numerical Analysis. In order to see this, let us fix an arbitrary integer vector c ∈ Z^n_+ and denote γ = Σ_{i=1}^n c^{(i)}. Consider the function

φ(x) = γ max_{1≤i≤n} |x^{(i)}| − (1 + γ) min_{1≤i≤n} |x^{(i)}| + |⟨c, x⟩|. (2.1)

Clearly, φ is a piece-wise linear function with φ(0) = 0. Nevertheless, we have the following discouraging result.

Lemma 1 It is NP-hard to decide if there exists x ∈ R^n with φ(x) < 0.

Proof: Assume that some vector σ ∈ R^n with coefficients ±1 satisfies the equation ⟨c, σ⟩ = 0. Then φ(σ) = γ − (1 + γ) = −1 < 0.

3) In this paper, for the sake of simplicity, we restrict ourselves to Euclidean norms only. The extension onto the general case can be done in a standard way using Bregman distances (e.g. [0]).

Let us assume now that φ(x) < 0 for some x ∈ R^n. We can always choose x with max_{1≤i≤n} |x^{(i)}| = 1. Denote δ = |⟨c, x⟩|. In view of our assumption, we have

|x^{(i)}| >(2.1) (γ + δ)/(1 + γ), i = 1, ..., n.

Denoting σ^{(i)} = sign x^{(i)}, i = 1, ..., n, we can rewrite this inequality as σ^{(i)} x^{(i)} > (γ + δ)/(1 + γ). Therefore,

|σ^{(i)} − x^{(i)}| = 1 − σ^{(i)} x^{(i)} < (1 − δ)/(1 + γ),

and we conclude that

|⟨c, σ⟩| ≤ |⟨c, x⟩| + |⟨c, σ − x⟩| ≤ δ + γ max_{1≤i≤n} |σ^{(i)} − x^{(i)}| < δ + γ (1 − δ)/(1 + γ) = (γ + δ)/(1 + γ) < 1.

Since c has integer coefficients, this is possible if and only if ⟨c, σ⟩ = 0. It is well known that verification of the solvability of the latter equation in Boolean variables is NP-hard (this is the standard Boolean Knapsack Problem). Thus, we have shown that finding a descent direction of the function φ is NP-hard. Considering now the function max{−1, φ(x)}, we can see that finding a (local) minimum of a unimodular nonsmooth nonconvex function is also NP-hard.

Thus, in a sharp contrast to Smooth Minimization, for nonsmooth functions even a local improvement of the objective is difficult. Therefore, in this paper we restrict ourselves to objective functions of a very special structure. Namely, we consider the problem of minimizing

φ(x) ≝ f(x) + Ψ(x) (2.2)

over a convex set Q, where the function f is differentiable, and the function Ψ is closed and convex on Q. For characterizing a solution to our problem, define the cone of feasible directions and the corresponding dual cone, which is called normal:

F(y) = {u = τ (x − y) : x ∈ Q, τ ≥ 0} ⊆ E, N(y) = {s : ⟨s, x − y⟩ ≥ 0 for all x ∈ Q} ⊆ E*, y ∈ Q.

Then the first-order necessary optimality condition at a point of local minimum x* can be written as follows:

φ'(x*) ≝ ∇f(x*) + ξ* ∈ N(x*), (2.3)

where ξ* ∈ ∂Ψ(x*). In other words,

⟨φ'(x*), u⟩ ≥ 0 for all u ∈ F(x*). (2.4)

Since Ψ is convex, the latter condition is equivalent to the following:

Dφ(x*)[u] ≥ 0 for all u ∈ F(x*). (2.5)

Note that in the case of convex f, any of the conditions (2.3)–(2.5) is sufficient for the point x* to be a point of global minimum of the function φ over Q. The last variant of the first-order optimality conditions is convenient for defining an approximate solution to our problem.

Definition 1 The point x̄ ∈ Q satisfies the first-order optimality conditions of a local minimum of the function φ over the set Q with accuracy ε ≥ 0 if

Dφ(x̄)[u] ≥ −ε for all u ∈ F(x̄), ‖u‖ = 1. (2.6)

Note that in the case F(x̄) = E with 0 ∉ ∇f(x̄) + ∂Ψ(x̄), this condition reduces to the following inequality:

−ε ≤ min_{‖u‖=1} Dφ(x̄)[u] = min_{‖u‖=1} max_{ξ ∈ ∂Ψ(x̄)} ⟨∇f(x̄) + ξ, u⟩ = max_{ξ ∈ ∂Ψ(x̄)} min_{‖u‖=1} ⟨∇f(x̄) + ξ, u⟩ = −min_{ξ ∈ ∂Ψ(x̄)} ‖∇f(x̄) + ξ‖*.

For finding a point x̄ satisfying condition (2.6), we will use the composite gradient mapping. Namely, at any y ∈ Q define

m_L(y; x) = f(y) + ⟨∇f(y), x − y⟩ + (L/2) ‖x − y‖^2 + Ψ(x),
T_L(y) = arg min_{x ∈ Q} m_L(y; x), (2.7)

where L is a positive constant. Recall that in the usual gradient mapping [8] we have Ψ(·) ≡ 0 (our modification is inspired by [0]). Then we can define a constrained analogue of the gradient direction of a smooth function, the vector

g_L(y) = L · B (y − T_L(y)) ∈ E*, (2.8)

where the operator B ≻ 0 defines the norm (1.3). (In case of an ambiguity in the objective function, we use the notation g_L(y)[φ].) It is easy to see that for Q ≡ E and Ψ ≡ 0 we get g_L(y) = ∇φ(y) ≡ ∇f(y) for any L > 0. Our assumption on the simplicity of the function Ψ means exactly the feasibility of the operation (2.7).

Let us mention the main properties of the composite gradient mapping. Almost all of them follow from the first-order optimality condition for problem (2.7):

⟨∇f(y) + L · B (T_L(y) − y) + ξ_L(y), x − T_L(y)⟩ ≥ 0 for all x ∈ Q, (2.9)

where ξ_L(y) ∈ ∂Ψ(T_L(y)). In what follows, we denote

φ'(T_L(y)) = ∇f(T_L(y)) + ξ_L(y) ∈ ∂φ(T_L(y)). (2.10)

We are going to show that the above subgradient inherits all the important properties of the gradient of a smooth convex function.

From now on, we assume that the first part of the objective function (2.2) has a Lipschitz-continuous gradient:

‖∇f(x) − ∇f(y)‖* ≤ L_f ‖x − y‖, x, y ∈ Q. (2.11)
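For Q = E, B = I and Ψ(x) = τ‖x‖_1, the mapping (2.7)–(2.8) is available in closed form through soft-thresholding. A sketch under these assumptions (the function names are ours; grad_f stands for the black-box oracle of f):

```python
import numpy as np

def T_L(y, grad_f, L, tau):
    # T_L(y) = argmin_x { f(y) + <f'(y), x-y> + (L/2)*||x-y||^2 + tau*||x||_1 }
    #        = soft-threshold(y - f'(y)/L, tau/L)
    z = y - grad_f(y) / L
    return np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)

def g_L(y, grad_f, L, tau):
    # composite gradient direction (2.8): g_L(y) = L*B*(y - T_L(y)), with B = I.
    return L * (y - T_L(y, grad_f, L, tau))
```

For τ = 0 the two functions reduce to an ordinary gradient step and to ∇f(y) itself, in agreement with the remark after (2.8).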

From (2.11) and the convexity of Q, one can easily derive the following useful inequality (see, for example, [5]):

f(x) ≤ f(y) + ⟨∇f(y), x − y⟩ + (L_f/2) ‖x − y‖^2, x, y ∈ Q. (2.12)

First of all, let us estimate the local variation of the function φ. Denote

S_L(y) = ‖∇f(T_L(y)) − ∇f(y)‖* / ‖T_L(y) − y‖ (≤ L_f).

Theorem 1 At any y ∈ Q,

φ(y) − φ(T_L(y)) ≥ (2L − L_f)/(2L^2) · ‖g_L(y)‖*^2, (2.13)
⟨φ'(T_L(y)), y − T_L(y)⟩ ≥ (L − L_f)/L^2 · ‖g_L(y)‖*^2. (2.14)

Moreover, for any x ∈ Q, we have

⟨φ'(T_L(y)), x − T_L(y)⟩ ≥ −(1 + S_L(y)/L) ‖g_L(y)‖* · ‖T_L(y) − x‖ ≥ −(1 + L_f/L) ‖g_L(y)‖* · ‖T_L(y) − x‖. (2.15)

Proof: For the sake of notation, denote T = T_L(y) and ξ = ξ_L(y). Then

φ(T) ≤(2.12) f(y) + ⟨∇f(y), T − y⟩ + (L_f/2) ‖T − y‖^2 + Ψ(T)
≤(2.9), x=y f(y) + ⟨L B (T − y) + ξ, y − T⟩ + (L_f/2) ‖T − y‖^2 + Ψ(T)
= f(y) − (L − L_f/2) ‖T − y‖^2 + Ψ(T) + ⟨ξ, y − T⟩ ≤ φ(y) − (L − L_f/2) ‖T − y‖^2.

Taking into account the definition (2.8), we get (2.13). Further,

⟨∇f(T) + ξ, y − T⟩ = ⟨∇f(y) + ξ, y − T⟩ − ⟨∇f(T) − ∇f(y), y − T⟩
≥(2.9), x=y ⟨L B (y − T), y − T⟩ − ⟨∇f(T) − ∇f(y), y − T⟩
≥(2.11) (L − L_f) ‖T − y‖^2 =(2.8) (L − L_f)/L^2 · ‖g_L(y)‖*^2.

Thus, we get (2.14). Finally,

⟨∇f(T) + ξ, T − x⟩ ≤(2.9) ⟨∇f(T), T − x⟩ + ⟨∇f(y) + L B (T − y), x − T⟩
= ⟨∇f(T) − ∇f(y), T − x⟩ + ⟨g_L(y), T − x⟩ ≤(2.8) (1 + S_L(y)/L) ‖g_L(y)‖* · ‖T − x‖,

and (2.15) follows.

Corollary 1 For any y ∈ Q and any u ∈ F(T_L(y)), ‖u‖ = 1, we have

Dφ(T_L(y))[u] ≥ −(1 + L_f/L) ‖g_L(y)‖*. (2.16)

In this respect, it is interesting to investigate the dependence of ‖g_L(y)‖* on L.

Lemma 2 The norm of the gradient direction, ‖g_L(y)‖*, is increasing in L, and the norm of the step, ‖T_L(y) − y‖, is decreasing in L.

Proof: Indeed, consider the function

ω(τ) = min_{x ∈ Q} [ f(y) + ⟨∇f(y), x − y⟩ + (1/(2τ)) ‖x − y‖^2 + Ψ(x) ].

The objective function of this minimization problem is jointly convex in x and τ. Therefore, ω(τ) is convex in τ. Since the minimum of this problem is attained at a single point, ω(τ) is differentiable and

ω'(τ) = −(1/(2τ^2)) ‖T_{1/τ}(y) − y‖^2 = −(1/2) ‖g_{1/τ}(y)‖*^2.

Since ω(·) is convex, ω'(τ) is an increasing function of τ. Hence, ‖g_{1/τ}(y)‖* is a decreasing function of τ. The second statement follows from the concavity of the function

ω̂(L) = min_{x ∈ Q} [ f(y) + ⟨∇f(y), x − y⟩ + (L/2) ‖x − y‖^2 + Ψ(x) ].

Now let us look at the output of the composite gradient mapping from a global perspective.

Theorem 2 For any y ∈ Q we have

m_L(y; T_L(y)) ≤ φ(y) − (1/(2L)) ‖g_L(y)‖*^2, (2.17)
m_L(y; T_L(y)) ≤ min_{x ∈ Q} [ φ(x) + ((L + L_f)/2) ‖x − y‖^2 ]. (2.18)

If the function f is convex, then

m_L(y; T_L(y)) ≤ min_{x ∈ Q} [ φ(x) + (L/2) ‖x − y‖^2 ]. (2.19)

Proof: Note that the function m_L(y; x) is strongly convex in x with convexity parameter L. Hence,

φ(y) − m_L(y; T_L(y)) = m_L(y; y) − m_L(y; T_L(y)) ≥ (L/2) ‖y − T_L(y)‖^2 = (1/(2L)) ‖g_L(y)‖*^2.

Further, if f is convex, then

m_L(y; T_L(y)) = min_{x ∈ Q} [ f(y) + ⟨∇f(y), x − y⟩ + (L/2) ‖x − y‖^2 + Ψ(x) ]
≤ min_{x ∈ Q} [ f(x) + Ψ(x) + (L/2) ‖x − y‖^2 ] = min_{x ∈ Q} [ φ(x) + (L/2) ‖x − y‖^2 ].

For nonconvex f, we can plug into the same reasoning the following consequence of (2.12):

f(y) + ⟨∇f(y), x − y⟩ ≤ f(x) + (L_f/2) ‖x − y‖^2.

Remark 1 In view of (2.12), for L ≥ L_f we have

φ(T_L(y)) ≤ m_L(y; T_L(y)). (2.20)

Hence, in this case inequality (2.19) guarantees

φ(T_L(y)) ≤ min_{x ∈ Q} [ φ(x) + (L/2) ‖x − y‖^2 ]. (2.21)

Finally, let us prove a useful inequality for strongly convex φ.

Lemma 3 Let the function φ be strongly convex with convexity parameter μ_φ > 0. Then for any y ∈ Q we have

‖T_L(y) − x*‖ ≤ (1/μ_φ) (1 + S_L(y)/L) ‖g_L(y)‖* ≤ (1/μ_φ) (1 + L_f/L) ‖g_L(y)‖*, (2.22)

where x* is the unique minimizer of φ on Q.

Proof: Indeed, in view of inequality (2.15), we have

(1 + L_f/L) ‖g_L(y)‖* · ‖T_L(y) − x*‖ ≥ (1 + S_L(y)/L) ‖g_L(y)‖* · ‖T_L(y) − x*‖ ≥ ⟨φ'(T_L(y)), T_L(y) − x*⟩ ≥ μ_φ ‖T_L(y) − x*‖^2,

and (2.22) follows.

Now we are ready to analyze different optimization schemes based on the composite gradient mapping. In the next section, we describe the simplest one.

3 Gradient method

Define first the gradient iteration with the simplest backtracking strategy for the line-search parameter (we call its termination condition the full relaxation).

Gradient Iteration G(x, M)

Set: L := M.
Repeat: T := T_L(x); if φ(T) > m_L(x; T), then L := L γ_u. (3.1)
Until: φ(T) ≤ m_L(x; T).
Output: G(x, M).T = T, G(x, M).L = L, G(x, M).S = S_L(x).

If there is an ambiguity in the objective function, we use the notation G_φ(x, M).

For running the gradient scheme, we need to choose an initial optimistic estimate L_0 for the Lipschitz constant L_f:

0 < L_0 ≤ L_f, (3.2)

and two adjustment parameters γ_u > 1 and γ_d ≥ 1. Let y_0 ∈ Q be our starting point. For k ≥ 0, consider the following iterative process.

Gradient Method GM(y_0, L_0)

y_{k+1} = G(y_k, L_k).T, M_k = G(y_k, L_k).L, L_{k+1} = max{L_0, M_k/γ_d}. (3.3)

Thus, y_{k+1} = T_{M_k}(y_k). Since the function f satisfies inequality (2.12), in the loop (3.1) the value L can keep increasing only if L ≤ L_f. Taking into account condition (3.2), we obtain the following bounds:

L_0 ≤ L_k ≤ M_k ≤ γ_u L_f. (3.4)
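Assuming the prox-step for Ψ is available in closed form (as in the examples of Section 1, with Q = E and B = I), the iteration (3.1) and the method (3.3) can be sketched as follows; the helper names, the numerical tolerance, and the fixed iteration budget are ours:

```python
import numpy as np

def gm(y0, L0, f, grad_f, psi, prox_psi, gamma_u=2.0, gamma_d=2.0, iters=60):
    # Gradient Method GM(y0, L0) of (3.3) with the backtracking loop (3.1).
    # prox_psi(z, t) must return argmin_x { (1/2)*||x - z||^2 + t*psi(x) },
    # so that T_L(y) = prox_psi(y - grad_f(y)/L, 1/L).
    y, L = y0.copy(), L0
    for _ in range(iters):
        while True:                     # Gradient Iteration G(y, L), loop (3.1)
            T = prox_psi(y - grad_f(y) / L, 1.0 / L)
            m = (f(y) + grad_f(y) @ (T - y)
                 + 0.5 * L * (T - y) @ (T - y) + psi(T))
            if f(T) + psi(T) <= m + 1e-12:          # full relaxation reached
                break
            L *= gamma_u                            # L := L * gamma_u
        y, M = T, L
        L = max(L0, M / gamma_d)                    # L_{k+1} of (3.3)
    return y
```

The optimistic restart L_{k+1} = max{L_0, M_k/γ_d} is what keeps the average number of extra function evaluations per iteration bounded by a small constant.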

Moreover, if γ_d ≥ γ_u, then

L_k ≤ L_f, k ≥ 0. (3.5)

Note that in (3.1) there is no explicit bound on the number of repetitions of the loop. However, it is easy to see that the total number of calls of the oracle N_k after k iterations of (3.3) cannot be too big.

Lemma 4 In the method (3.3), for any k ≥ 0 we have

N_k ≤ (1 + ln γ_d/ln γ_u)(k + 1) + (1/ln γ_u) (ln (γ_u L_f/(γ_d L_0)))_+. (3.6)

Proof: Denote by n_i ≥ 1 the number of calls of the oracle at iteration i ≥ 0. Then

L_{i+1} ≥ γ_d^{-1} L_i γ_u^{n_i − 1}.

Thus,

n_i ≤ 1 + ln γ_d/ln γ_u + (1/ln γ_u) ln (L_{i+1}/L_i).

Hence, we can estimate

N_k = Σ_{i=0}^k n_i ≤ (1 + ln γ_d/ln γ_u)(k + 1) + (1/ln γ_u) ln (L_{k+1}/L_0).

It remains to note that L_{k+1} ≤(3.4) max{L_0, (γ_u/γ_d) L_f}.

A reasonable choice of the adjustment parameters is as follows:

γ_u = γ_d = 2  ⟹  N_k ≤(3.6) 2(k + 1) + log_2 (L_f/L_0), L_k ≤(3.5) L_f. (3.7)

Thus, the performance of the Gradient Method (3.3) is well described by the estimates for the iteration counter. Therefore, in the rest of this section we will focus on estimating the rate of convergence of this method in different situations.

Let us start from the general nonconvex case. Denote

δ_k = min_{0≤i≤k} (1/(2M_i)) ‖g_{M_i}(y_i)‖*^2, i_k = 1 + arg min_{0≤i≤k} (1/(2M_i)) ‖g_{M_i}(y_i)‖*^2.

Theorem 3 Let the function φ be bounded below on Q by some constant φ*. Then

δ_k ≤ (φ(y_0) − φ*)/(k + 1). (3.8)

Moreover, for any u ∈ F(y_{i_k}) with ‖u‖ = 1 we have

Dφ(y_{i_k})[u] ≥ −(1 + γ_u) (L_f/L_0^{1/2}) (2 (φ(y_0) − φ*)/(k + 1))^{1/2}. (3.9)

Proof: Indeed, in view of the termination criterion in (3.1), we have

φ(y_i) − φ(y_{i+1}) ≥ φ(y_i) − m_{M_i}(y_i; T_{M_i}(y_i)) ≥(2.17) (1/(2M_i)) ‖g_{M_i}(y_i)‖*^2.

Summing up these inequalities for i = 0, ..., k, we obtain (3.8).

Denote j_k = i_k − 1. Since y_{i_k} = T_{M_{j_k}}(y_{j_k}), for any u ∈ F(y_{i_k}) with ‖u‖ = 1 we have

Dφ(y_{i_k})[u] ≥(2.16) −(1 + L_f/M_{j_k}) ‖g_{M_{j_k}}(y_{j_k})‖* = −(1 + L_f/M_{j_k}) (2 M_{j_k} δ_k)^{1/2}
≥(3.8) −((M_{j_k} + L_f)/M_{j_k}^{1/2}) (2 (φ(y_0) − φ*)/(k + 1))^{1/2} ≥(3.4) −(1 + γ_u) (L_f/L_0^{1/2}) (2 (φ(y_0) − φ*)/(k + 1))^{1/2}.

Let us describe now the behavior of the Gradient Method (3.3) in the convex case.

Theorem 4 Let the function f be convex on Q. Assume that it attains its minimum on Q at a point x*, and that the level sets of φ are bounded:

‖y − x*‖ ≤ R for all y ∈ Q such that φ(y) ≤ φ(y_0). (3.10)

If φ(y_0) − φ(x*) ≥ γ_u L_f R^2, then φ(y_1) − φ(x*) ≤ (γ_u L_f R^2)/2. Otherwise, for any k ≥ 0 we have

φ(y_k) − φ(x*) ≤ (2 γ_u L_f R^2)/(k + 2). (3.11)

Moreover, for any u ∈ F(y_{i_k}) with ‖u‖ = 1 we have

Dφ(y_{i_k})[u] ≥ −4 (1 + γ_u) L_f R (γ_u L_f/(L_0 (k + 1)(k + 4)))^{1/2}. (3.12)

Proof: Since φ(y_{k+1}) ≤ φ(y_k) for all k ≥ 0, the bound ‖y_k − x*‖ ≤ R is valid for all generated points. Consider

y_k(α) = α x* + (1 − α) y_k ∈ Q, α ∈ [0, 1].

Then,

φ(y_{k+1}) ≤ m_{M_k}(y_k; T_{M_k}(y_k)) ≤(2.19) min_{y ∈ Q} [ φ(y) + (M_k/2) ‖y − y_k‖^2 ]
≤ (y = y_k(α)) min_{0≤α≤1} [ φ(α x* + (1 − α) y_k) + (M_k α^2/2) ‖y_k − x*‖^2 ]
≤(3.4) min_{0≤α≤1} [ φ(y_k) − α (φ(y_k) − φ(x*)) + (γ_u L_f R^2/2) α^2 ].

If φ(y_0) − φ(x*) ≥ γ_u L_f R^2, then the optimal solution of the latter optimization problem is α = 1, and we get φ(y_1) − φ(x*) ≤ (γ_u L_f R^2)/2. Otherwise, the optimal solution is

α = (φ(y_k) − φ(x*))/(γ_u L_f R^2) ≤ (φ(y_0) − φ(x*))/(γ_u L_f R^2) ≤ 1,

and we obtain

φ(y_{k+1}) ≤ φ(y_k) − (φ(y_k) − φ(x*))^2/(2 γ_u L_f R^2). (3.13)

From this inequality, denoting λ_k = φ(y_k) − φ(x*), we get

1/λ_{k+1} ≥ 1/λ_k + λ_k/(2 γ_u L_f R^2 λ_{k+1}) ≥ 1/λ_k + 1/(2 γ_u L_f R^2).

Hence, for k ≥ 0 we have

1/λ_k ≥ 1/(φ(y_0) − φ(x*)) + k/(2 γ_u L_f R^2) ≥ (k + 2)/(2 γ_u L_f R^2).

Further, let us fix an integer m, 0 < m < k. Since

φ(y_i) − φ(y_{i+1}) ≥ (1/(2M_i)) ‖g_{M_i}(y_i)‖*^2, i = 0, ..., k,

we have

(k − m + 1) δ_k ≤ Σ_{i=m}^k (1/(2M_i)) ‖g_{M_i}(y_i)‖*^2 ≤ φ(y_m) − φ(y_{k+1}) ≤ φ(y_m) − φ(x*) ≤(3.11) (2 γ_u L_f R^2)/(m + 2).

Denote j_k = i_k − 1. Then, for any u ∈ F(y_{i_k}) with ‖u‖ = 1, we have

Dφ(y_{i_k})[u] ≥(2.16) −(1 + L_f/M_{j_k}) ‖g_{M_{j_k}}(y_{j_k})‖* = −(1 + L_f/M_{j_k}) (2 M_{j_k} δ_k)^{1/2}
≥(3.4) −2 (1 + γ_u) L_f R (γ_u L_f/(L_0 (m + 2)(k − m + 1)))^{1/2}.

Choosing m = ⌊k/2⌋, we get (m + 2)(k − m + 1) ≥ (k + 1)(k + 4)/4, and (3.12) follows.

Theorem 5 Let the function φ be strongly convex on Q with convexity parameter μ_φ. If μ_φ/(2L_f) ≥ γ_u, then for any k ≥ 0 we have

φ(y_k) − φ(x*) ≤ (γ_u L_f/μ_φ)^k (φ(y_0) − φ(x*)) ≤ (1/2)^k (φ(y_0) − φ(x*)). (3.14)

Otherwise,

φ(y_k) − φ(x*) ≤ (1 − μ_φ/(4 γ_u L_f))^k (φ(y_0) − φ(x*)). (3.15)

Proof: Since φ is strongly convex, for any k ≥ 0 we have

φ(y_k) − φ(x*) ≥ (μ_φ/2) ‖y_k − x*‖^2. (3.16)

Denote y_k(α) = α x* + (1 − α) y_k ∈ Q, α ∈ [0, 1]. Then,

φ(y_{k+1}) ≤(2.19),(3.4) min_{0≤α≤1} [ φ(α x* + (1 − α) y_k) + (γ_u L_f α^2/2) ‖y_k − x*‖^2 ]
≤ min_{0≤α≤1} [ φ(y_k) − α (φ(y_k) − φ(x*)) + (γ_u L_f α^2/2) ‖y_k − x*‖^2 ]
≤(3.16) min_{0≤α≤1} [ φ(y_k) − α (1 − (γ_u L_f/μ_φ) α) (φ(y_k) − φ(x*)) ].

The minimum of the last expression is achieved for α* = min{1, μ_φ/(2 γ_u L_f)}. Hence, if μ_φ/(2 γ_u L_f) ≥ 1, then α* = 1 and we get

φ(y_{k+1}) − φ(x*) ≤ (γ_u L_f/μ_φ) (φ(y_k) − φ(x*)) ≤ (1/2) (φ(y_k) − φ(x*)).

If μ_φ/(2 γ_u L_f) ≤ 1, then α* = μ_φ/(2 γ_u L_f) and

φ(y_{k+1}) − φ(x*) ≤ (1 − μ_φ/(4 γ_u L_f)) (φ(y_k) − φ(x*)).

Remark 2 1) In Theorem 5, the condition number L_f/μ_φ can be smaller than one.
2) For strongly convex φ, the bounds on the directional derivatives can be obtained by combining the inequalities (3.14) and (3.15) with the estimate

φ(y_k) − φ(x*) ≥(2.13), L=L_f (1/(2L_f)) ‖g_{L_f}(y_k)‖*^2

and inequality (2.16). Thus, inequality (3.14) results in the bound

Dφ(y_{k+1})[u] ≥ −2 (γ_u L_f/μ_φ)^{k/2} (2 L_f (φ(y_0) − φ*))^{1/2}, (3.17)

and inequality (3.15) leads to the bound

Dφ(y_{k+1})[u] ≥ −2 (1 − μ_φ/(4 γ_u L_f))^{k/2} (2 L_f (φ(y_0) − φ*))^{1/2}, (3.18)

which are valid for all u ∈ F(y_{k+1}) with ‖u‖ = 1.
3) The estimate (3.6) may create an impression that a large value of γ_u can reduce the total number of calls of the oracle. This is not true, since γ_u enters also into the estimates of the rate of convergence of the method (e.g. (3.11)). Therefore, reasonable values of this parameter lie in the interval [2, 3].

4 Accelerated scheme

In the previous section, we have seen that, for convex f, the gradient method (3.3) converges as O(1/k). However, it is well known that on convex problems the usual gradient scheme can be accelerated (e.g. Chapter 2 in [9]). Let us show that the same acceleration can be achieved for composite objective functions.

Consider the problem

min_{x ∈ E} [ φ(x) = f(x) + Ψ(x) ], (4.1)

where the function f is convex and satisfies (2.11), and the function Ψ is closed and strongly convex on E with convexity parameter μ_Ψ ≥ 0. We assume this parameter to be known. The case μ_Ψ = 0 corresponds to convex Ψ. Denote by x* the optimal solution of (4.1).

In problem (4.1), we allow dom Ψ ≠ E. Therefore, the formulation (4.1) covers also constrained problem instances. Note that for (4.1), the first-order optimality condition (2.9) defining the composite gradient mapping can be written in a simpler form:

T_L(y) ∈ dom Ψ, ∇f(y) + ξ_L(y) = L B (y − T_L(y)) ≡ g_L(y), (4.2)

where ξ_L(y) ∈ ∂Ψ(T_L(y)).

For justifying the rate of convergence of different schemes as applied to (4.1), we will use the machinery of estimate functions in its newer variant [3]. Taking into account the special form of the objective in (4.1), we update recursively the following sequences:

- A minimizing sequence {x_k}.
- A sequence of increasing scaling coefficients {A_k}:

A_0 = 0, A_k ≝ A_{k−1} + a_k, k ≥ 1.

- A sequence of estimate functions

ψ_k(x) = l_k(x) + A_k Ψ(x) + (1/2) ‖x − x_0‖^2, k ≥ 0, (4.3)

where x_0 ∈ dom Ψ is our starting point, and l_k(x) are linear functions in x ∈ E. However, as compared with [3], we will add a possibility to update the estimates for the Lipschitz constant L_f, using the initial guess L_0 satisfying (3.2), and two adjustment parameters γ_u > 1 and γ_d ≥ 1.

For the above objects, we maintain recursively the following relations:

R1_k: A_k φ(x_k) ≤ ψ*_k ≝ min_{x ∈ E} ψ_k(x),
R2_k: ψ_k(x) ≤ A_k φ(x) + (1/2) ‖x − x_0‖^2 for all x ∈ E, k ≥ 0. (4.4)

These relations clearly justify the following rate of convergence of the minimizing sequence:

φ(x_k) − φ(x*) ≤ ‖x* − x_0‖^2/(2 A_k), k ≥ 1. (4.5)

Denote v_k = arg min_{x ∈ E} ψ_k(x). Since the convexity parameter of ψ_k is at least one, for any x ∈ E we have

A_k φ(x_k) + (1/2) ‖x − v_k‖^2 ≤(R1_k) ψ*_k + (1/2) ‖x − v_k‖^2 ≤ ψ_k(x) ≤(R2_k) A_k φ(x) + (1/2) ‖x − x_0‖^2.

Hence, taking x = x*, we get two useful consequences of (4.4):

‖x* − v_k‖ ≤ ‖x* − x_0‖, ‖v_k − x_0‖ ≤ 2 ‖x* − x_0‖, k ≥ 0. (4.6)

Note that the relations (4.4) can be used for justifying the rate of convergence of a dual variant of the gradient method (3.3). Indeed, for v_0 ∈ dom Ψ define ψ_0(x) = (1/2) ‖x − v_0‖^2, and choose L_0 satisfying condition (3.2).

Dual Gradient Method DG(v_0, L_0), k ≥ 0:

y_k = G(v_k, L_k).T, M_k = G(v_k, L_k).L, L_{k+1} = max{L_0, M_k/γ_d}, a_{k+1} = 1/M_k, (4.7)
ψ_{k+1}(x) = ψ_k(x) + (1/M_k) [ f(v_k) + ⟨∇f(v_k), x − v_k⟩ + Ψ(x) ].

In this scheme, the operation G is defined by (3.1). Since Ψ is simple, the points v_k are easily computable.

Note that the relation R1_0 and the relations R2_k, k ≥ 0, are trivial. The relations R1_k can be justified by induction. Define x_0 = y_0, φ*_k = min_{0≤i≤k} φ(y_i), and x_k such that φ(x_k) = φ*_{k−1} for k ≥ 1. Then

ψ*_{k+1} = min_x { ψ_k(x) + (1/M_k) [ f(v_k) + ⟨∇f(v_k), x − v_k⟩ + Ψ(x) ] }
≥(R1_k) A_k φ*_{k−1} + min_x { (1/2) ‖x − v_k‖^2 + (1/M_k) [ f(v_k) + ⟨∇f(v_k), x − v_k⟩ + Ψ(x) ] }
=(2.7) A_k φ*_{k−1} + a_{k+1} m_{M_k}(v_k; y_k) ≥(3.1) A_k φ*_{k−1} + a_{k+1} φ(y_k) ≥ A_{k+1} φ*_k.

Thus, the relations R1_k are valid for all k ≥ 0. Since the values M_k satisfy the bounds (3.4), for method (4.7) we obtain the following rate of convergence:

φ(x_k) − φ(x*) ≤ (γ_u L_f/(2k)) ‖x* − v_0‖^2, k ≥ 1. (4.8)

Note that the constant in the right-hand side of this inequality is four times smaller than the constant in (3.11). However, each iteration in the dual method is two times more expensive as compared to the primal version (3.3).
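For Ψ(x) = τ‖x‖_1 and B = I, the points v_k of (4.7) are indeed easily computable: ψ_k is the quadratic (1/2)‖x − v_0‖^2 plus an accumulated linear term plus (Σ a_i)·Ψ(x), so its minimizer is again a soft-thresholding. A sketch under these assumptions (function names and tolerance are ours):

```python
import numpy as np

def soft(z, t):
    # soft-thresholding: argmin_x { (1/2)*||x - z||^2 + t*||x||_1 }
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dg(v0, L0, f, grad_f, tau, gamma_u=2.0, gamma_d=2.0, iters=200):
    # Dual Gradient Method DG(v0, L0) of (4.7) for psi(x) = tau*||x||_1.
    v, L = v0.copy(), L0
    s, a_sum = np.zeros_like(v0), 0.0   # linear part of psi_k and sum of a_i
    best = None                          # best phi(y_i) seen so far, i.e. x_k
    for _ in range(iters):
        while True:                      # operation G of (3.1)
            T = soft(v - grad_f(v) / L, tau / L)
            m = (f(v) + grad_f(v) @ (T - v)
                 + 0.5 * L * (T - v) @ (T - v) + tau * np.sum(np.abs(T)))
            if f(T) + tau * np.sum(np.abs(T)) <= m + 1e-12:
                break
            L *= gamma_u
        M = L
        phi_T = f(T) + tau * np.sum(np.abs(T))
        best = phi_T if best is None else min(best, phi_T)
        s += grad_f(v) / M               # psi_{k+1} update of (4.7)
        a_sum += 1.0 / M                 # a_{k+1} = 1/M_k
        v = soft(v0 - s, a_sum * tau)    # v_{k+1} = argmin psi_{k+1}
        L = max(L0, M / gamma_d)
    return best
```

The code returns the best objective value along {y_k}, which is exactly the quantity φ(x_k) controlled by the estimate (4.8).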

However, the method (4.7) does not implement the best way of using the machinery of estimate functions. Let us look now at the accelerated version of (4.7). As parameters, it has a starting point x_0 ∈ dom Ψ, a lower estimate L_0 > 0 for the Lipschitz constant L_f, and a lower estimate µ ∈ [0, µ_Ψ] for the convexity parameter of the function Ψ.

    Accelerated method A(x_0, L_0, µ)

    Initial settings:  ψ_0(x) = (1/2)||x − x_0||²,   A_0 = 0.

    Iteration k ≥ 0:
      Set L := L_k.
      Repeat:
        Find a from the quadratic equation  a² / (A_k + a) = 2 (1 + µ A_k) / L.    (*)
        Set y = (A_k x_k + a v_k) / (A_k + a), and compute T_L(y).
        If ⟨φ'(T_L(y)), y − T_L(y)⟩ < (1/L) ||φ'(T_L(y))||²,  then L := L γ_u.
      Until:  ⟨φ'(T_L(y)), y − T_L(y)⟩ ≥ (1/L) ||φ'(T_L(y))||².    (**)    (4.9)
      Define:  y_k := y,   M_k := L,   a_{k+1} := a,   L_{k+1} := M_k / γ_d,
               x_{k+1} := T_{M_k}(y_k),
               ψ_{k+1}(x) := ψ_k(x) + a_{k+1} [ f(x_{k+1}) + ⟨∇f(x_{k+1}), x − x_{k+1}⟩ + Ψ(x) ].

Here φ'(T_L(y)) := ∇f(T_L(y)) + ξ_L(y) ∈ ∂φ(T_L(y)) is the subgradient from (4.2). As compared with the Gradient Iteration of Section 3, we use the damped relaxation condition (**) as the stopping criterion of the internal cycle of (4.9).

Lemma 5. Condition (**) in (4.9) is satisfied for any L ≥ L_f.

Proof. Denote T = T_L(y). Multiplying the representation

    φ'(T) = ∇f(T) + ξ_L(y) = L B(y − T) + ∇f(T) − ∇f(y)    (4.10)

by the vector y − T, we obtain

    ⟨φ'(T), y − T⟩ = L ||y − T||² − ⟨∇f(y) − ∇f(T), y − T⟩
        = (1/L) [ ||φ'(T)||² + ⟨φ'(T), B⁻¹(∇f(y) − ∇f(T))⟩ ]
        = (1/L) ||φ'(T)||² + ⟨∇f(y) − ∇f(T), y − T⟩ − (1/L) ||∇f(y) − ∇f(T)||².

Since ⟨∇f(y) − ∇f(T), y − T⟩ ≥ (1/L_f) ||∇f(y) − ∇f(T)||², for L ≥ L_f condition (**) is satisfied. □

Thus, we can always guarantee

    L_k ≤ M_k ≤ γ_u L_f.    (4.11)

If γ_d ≥ γ_u, then the upper bound (3.5) remains valid. Let us establish a relation between the total number of oracle calls N_k after k iterations and the value of the iteration counter.

Lemma 6. In the method (4.9), for any k ≥ 0 we have

    N_k ≤ 2 [ 1 + ln γ_d / ln γ_u ] (k + 1) + (2 / ln γ_u) ln( γ_u L_f / (γ_d L_0) ).    (4.12)

Proof. Denote by n_i the number of oracle calls at iteration i ≥ 0. At each cycle of the internal loop we call the oracle twice, for computing ∇f(y) and f(T_L(y)). Therefore

    L_{i+1} = (1/γ_d) L_i γ_u^{n_i/2 − 1}.

Thus,

    n_i = 2 [ 1 + ln γ_d / ln γ_u + (1 / ln γ_u) ln( L_{i+1} / L_i ) ].

Hence, we can compute

    N_k = Σ_{i=0}^{k} n_i = 2 [ 1 + ln γ_d / ln γ_u ] (k + 1) + (2 / ln γ_u) ln( L_{k+1} / L_0 ).

It remains to note that L_{k+1} ≤ (γ_u / γ_d) L_f. □

Thus, each iteration of (4.9) needs approximately two times more oracle calls than one iteration of the Gradient Method: for γ_u = γ_d = 2,

    N_k ≤ 4 (k + 1) + 2 log₂ (L_f / L_0),    L_k ≤ 2 L_f.    (4.13)

However, we will see that the rate of convergence of (4.9) is much higher. Let us start with two auxiliary statements.

Lemma 7. Assume µ ≤ µ_Ψ. Then the sequences {x_k}, {A_k} and {ψ_k} generated by the method A(x_0, L_0, µ) satisfy relations (4.4) for all k ≥ 0.

Proof. In view of the initial settings of (4.9), A_0 = 0 and ψ*_0 = 0. Hence, for k = 0 both relations (4.4) are trivial. Assume now that relations R¹_k, R²_k are valid for some k ≥ 0. In view of R²_k and the convexity of f, for any x ∈ E we have

    ψ_{k+1}(x) ≤ A_k φ(x) + (1/2)||x − x_0||² + a_{k+1} [ f(x_{k+1}) + ⟨∇f(x_{k+1}), x − x_{k+1}⟩ + Ψ(x) ]
        ≤ (A_k + a_{k+1}) φ(x) + (1/2)||x − x_0||²,

and this is R²_{k+1}. Let us show that relation R¹_{k+1} is also valid. In view of (4.3), the function ψ_k is strongly convex with convexity parameter 1 + µ A_k. Hence, in view of R¹_k, for any x ∈ E we have

    ψ_k(x) ≥ ψ*_k + ((1 + µ A_k)/2)||x − v_k||² ≥ A_k φ(x_k) + ((1 + µ A_k)/2)||x − v_k||².    (4.14)

Therefore, with φ'(x_{k+1}) := ∇f(x_{k+1}) + ξ_{M_k}(y_k) ∈ ∂φ(x_{k+1}),

    ψ*_{k+1} = min_x { ψ_k(x) + a_{k+1} [ f(x_{k+1}) + ⟨∇f(x_{k+1}), x − x_{k+1}⟩ + Ψ(x) ] }
      ≥ min_x { A_k φ(x_k) + ((1+µA_k)/2)||x − v_k||² + a_{k+1} [ φ(x_{k+1}) + ⟨φ'(x_{k+1}), x − x_{k+1}⟩ ] }
      ≥ min_x { (A_k + a_{k+1}) φ(x_{k+1}) + A_k ⟨φ'(x_{k+1}), x_k − x_{k+1}⟩
                + a_{k+1} ⟨φ'(x_{k+1}), x − x_{k+1}⟩ + ((1+µA_k)/2)||x − v_k||² }
      = min_x { A_{k+1} φ(x_{k+1}) + ⟨φ'(x_{k+1}), A_{k+1} y_k − a_{k+1} v_k − A_k x_{k+1}⟩
                + a_{k+1} ⟨φ'(x_{k+1}), x − x_{k+1}⟩ + ((1+µA_k)/2)||x − v_k||² }
      = min_x { A_{k+1} φ(x_{k+1}) + A_{k+1} ⟨φ'(x_{k+1}), y_k − x_{k+1}⟩
                + a_{k+1} ⟨φ'(x_{k+1}), x − v_k⟩ + ((1+µA_k)/2)||x − v_k||² },

where the identity A_{k+1} y_k = A_k x_k + a_{k+1} v_k from (4.9) was used. The minimum of the latter problem is attained at x = v_k − (a_{k+1}/(1 + µA_k)) B⁻¹ φ'(x_{k+1}). Thus, we have proved the inequality

    ψ*_{k+1} ≥ A_{k+1} φ(x_{k+1}) + A_{k+1} ⟨φ'(x_{k+1}), y_k − x_{k+1}⟩ − (a²_{k+1}/(2(1 + µA_k))) ||φ'(x_{k+1})||².

On the other hand, by the termination criterion (**) in (4.9), we have

    ⟨φ'(x_{k+1}), y_k − x_{k+1}⟩ ≥ (1/M_k) ||φ'(x_{k+1})||².

It remains to note that in (4.9) we choose a_{k+1} from the quadratic equation

    A_{k+1} ≡ A_k + a_{k+1} = M_k a²_{k+1} / (2 (1 + µ A_k)).

Hence A_{k+1}/M_k = a²_{k+1}/(2(1 + µA_k)), and the two previous inequalities give ψ*_{k+1} ≥ A_{k+1} φ(x_{k+1}). Thus, R¹_{k+1} is valid. □

Thus, in order to use inequality (4.5) for deriving the rate of convergence of the method A(x_0, L_0, µ), we need to estimate the rate of growth of the scaling coefficients {A_k}.

Lemma 8. For any µ ≥ 0, the scaling coefficients grow as follows:

    A_k ≥ k² / (2 γ_u L_f),   k ≥ 0.    (4.15)

For µ > 0, the rate of growth is linear:

    A_k ≥ (2 / (γ_u L_f)) [ 1 + ( µ / (2 γ_u L_f) )^{1/2} ]^{2(k−1)},   k ≥ 1.    (4.16)

Proof. In view of equation (*) in (4.9), we have

    2 A_{k+1} (1 + µ A_k) = M_k (A_{k+1} − A_k)²
        = M_k ( A^{1/2}_{k+1} − A^{1/2}_k )² ( A^{1/2}_{k+1} + A^{1/2}_k )²
        ≤ 4 M_k A_{k+1} ( A^{1/2}_{k+1} − A^{1/2}_k )²
        ≤ 4 γ_u L_f A_{k+1} ( A^{1/2}_{k+1} − A^{1/2}_k )².

Thus, A^{1/2}_{k+1} − A^{1/2}_k ≥ 1/(2 γ_u L_f)^{1/2}, and for any k ≥ 0 we get A^{1/2}_k ≥ k/(2 γ_u L_f)^{1/2}. If µ > 0, then, by the same reasoning as above,

    2 µ A_k A_{k+1} < 2 A_{k+1} (1 + µ A_k) ≤ 4 γ_u L_f A_{k+1} ( A^{1/2}_{k+1} − A^{1/2}_k )².

Hence, A^{1/2}_{k+1} ≥ A^{1/2}_k [ 1 + ( µ/(2 γ_u L_f) )^{1/2} ]. Since A_1 = 2/M_0 ≥ 2/(γ_u L_f), we come to (4.16). □

Now we can summarize all our observations.

Theorem 6. Let the gradient of the function f be Lipschitz continuous with constant L_f, and let the parameter L_0 satisfy 0 < L_0 ≤ L_f. Then the rate of convergence of the method A(x_0, L_0, 0) as applied to problem (4.1) can be estimated as follows:

    φ(x_k) − φ(x*) ≤ γ_u L_f ||x* − x_0||² / k²,   k ≥ 1.    (4.17)

If in addition the function Ψ is strongly convex, then the sequence {x_k} generated by A(x_0, L_0, µ_Ψ) satisfies both (4.17) and the following inequality:

    φ(x_k) − φ(x*) ≤ (γ_u L_f / 4) ||x* − x_0||² [ 1 + ( µ_Ψ/(2 γ_u L_f) )^{1/2} ]^{−2(k−1)},   k ≥ 1.    (4.18)

In the next section we show how to apply this result in order to achieve some specific goals for different optimization problems.
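The scheme A(x_0, L_0, µ) can be sketched in a few lines for the case µ = 0 and Ψ(x) = τ||x||₁. The sketch below replaces the internal line search by a fixed constant L ≥ L_f (an assumption; (4.9) adjusts L adaptively), which keeps condition (**) satisfied by Lemma 5:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the minimizer defining T_L and v_k when Psi = tau*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def accelerated_composite(grad_f, x0, L, tau, iters):
    """Sketch of method (4.9) with mu = 0, B = I and fixed L (no line search).
    a_{k+1} solves a^2/(A_k + a) = 2/L, i.e. L*a^2 - 2*a - 2*A_k = 0."""
    x, v, A = x0.copy(), x0.copy(), 0.0
    acc = np.zeros_like(x0)        # sum_i a_i * grad f(x_i): linear part of psi_k
    for _ in range(iters):
        a = (1.0 + np.sqrt(1.0 + 2.0 * L * A)) / L   # positive root of (*)
        y = (A * x + a * v) / (A + a)
        x = soft(y - grad_f(y) / L, tau / L)          # x_{k+1} = T_L(y)
        A += a
        acc += a * grad_f(x)
        v = soft(x0 - acc, A * tau)                   # v_{k+1} = argmin psi_{k+1}
    return x

# Small Lasso instance: f(x) = 0.5*||Ax - b||^2, L_f = ||A^T A||_2.
rng = np.random.default_rng(0)
Amat = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
Lf = np.linalg.norm(Amat.T @ Amat, 2)
xk = accelerated_composite(lambda x: Amat.T @ (Amat @ x - b),
                           np.zeros(5), Lf, tau=0.1, iters=500)
```

By Theorem 6, the iterates of this sketch satisfy the O(1/k²) bound (4.17) with γ_u = 1.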

5 Different minimization strategies

5.1 Strongly convex objective with known parameter

Consider the following convex constrained minimization problem:

    min_{x ∈ Q} ˆf(x),    (5.1)

where Q is a closed convex set, and ˆf is a strongly convex function with Lipschitz continuous gradient. Assume the convexity parameter µ_ˆf to be known. Denote by σ_Q(x) the indicator function of the set Q:

    σ_Q(x) = 0 if x ∈ Q,  +∞ otherwise.

We can solve problem (5.1) by two different techniques.

1. Reallocating the prox-term in the objective. For µ ∈ (0, µ_ˆf], define

    f(x) = ˆf(x) − (µ/2)||x − x_0||²,    Ψ(x) = σ_Q(x) + (µ/2)||x − x_0||².    (5.2)

Note that the function f in (5.2) is convex and its gradient is Lipschitz continuous with L_f = L_ˆf − µ. Moreover, the function Ψ is strongly convex with convexity parameter µ. On the other hand,

    φ(x) = f(x) + Ψ(x) = ˆf(x) + σ_Q(x).

Thus, the corresponding unconstrained minimization problem (4.1) coincides with the constrained problem (5.1). Since all conditions of Theorem 6 are satisfied, the method A(x_0, L_0, µ) has the following performance guarantee:

    ˆf(x_k) − ˆf(x*) ≤ (γ_u (L_ˆf − µ)/4) ||x* − x_0||² [ 1 + ( µ/(2 γ_u (L_ˆf − µ)) )^{1/2} ]^{−2(k−1)},  k ≥ 1.    (5.3)

This means that an ε-solution of problem (5.1) can be obtained by this technique in

    O( (L_ˆf/µ)^{1/2} ln(1/ε) )    (5.4)

iterations. Note that the same problem can also be solved by the Gradient Method (3.3). However, in accordance with (3.5), its performance guarantee is much worse: it needs O( (L_ˆf/µ) ln(1/ε) ) iterations.

2. Restart. For problem (5.1), define the following components of the composite objective function in (4.1):

    f(x) = ˆf(x),    Ψ(x) = σ_Q(x).    (5.5)
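With the components (5.5), the two-level restart process (5.6) described next can be sketched as follows (a Python sketch under simplifying assumptions: Q = E, γ_u = 1, and a plain fixed-step accelerated stage standing in for A(u_k, L_0, 0)):

```python
import numpy as np

def agm(grad_f, x0, L, iters):
    """One stage: accelerated gradient method with mu = 0, Psi = 0, Q = E and
    fixed L (no line search); a stand-in for A(u_k, L0, 0)."""
    x, v, A = x0.copy(), x0.copy(), 0.0
    acc = np.zeros_like(x0)
    for _ in range(iters):
        a = (1.0 + np.sqrt(1.0 + 2.0 * L * A)) / L
        y = (A * x + a * v) / (A + a)
        x = y - grad_f(y) / L
        A += a
        acc += a * grad_f(x)
        v = x0 - acc                  # minimizer of the estimate function
    return x

def restart(grad_f, u0, L, mu, stages):
    """Two-level process (5.6): N = ceil(sqrt(8*L/mu)) inner iterations give,
    via (4.17) and strong convexity, at least a factor-4 reduction of the
    residual f(u_k) - f* at every stage."""
    N = int(np.ceil(np.sqrt(8.0 * L / mu)))
    u = u0
    for _ in range(stages):
        u = agm(grad_f, u, L, N)
    return u

# Strongly convex quadratic test function f(x) = 0.5 * x^T Q x.
Q = np.diag([1.0, 10.0, 100.0])
u = restart(lambda x: Q @ x, np.array([1.0, 1.0, 1.0]), L=100.0, mu=1.0, stages=10)
```

The stage length N is the only place where the strong convexity parameter enters, which is what makes the restart technique attractive when µ is known.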

Let us fix an upper bound N on the number of iterations of A. Consider the following two-level process:

    Choose u_0 ∈ Q.
    Compute u_{k+1} as the result of N iterations of A(u_k, L_0, 0),  k ≥ 0.    (5.6)

In view of definition (5.5), we have

    ˆf(u_{k+1}) − ˆf(x*) ≤ γ_u L_ˆf ||x* − u_k||² / N² ≤ (2 γ_u L_ˆf / (µ_ˆf N²)) [ ˆf(u_k) − ˆf(x*) ],

where the first inequality is (4.17) and the second uses the strong convexity of ˆf. Thus, taking N = 2 (γ_u L_ˆf / µ_ˆf)^{1/2}, we obtain

    ˆf(u_{k+1}) − ˆf(x*) ≤ (1/2) [ ˆf(u_k) − ˆf(x*) ].

Hence, the performance guarantee of this technique is of the same order as (5.4).

5.2 Approximating the first-order optimality conditions

In some applications, we are interested in finding a point with a small residual in the system of first-order optimality conditions. Since

    Dφ(T_L(x))[u] ≥ −(1 + L_f/L) ||g_L(x)||,   u ∈ F(T_L(x)),  ||u|| = 1,    (5.7)

the upper bounds on this residual can be obtained from the estimates on the rate of convergence of method (4.9) in the form (4.17) or (4.18). However, in this case the first inequality does not give a satisfactory result. Indeed, it can only guarantee that the right-hand side of inequality (5.7) vanishes as O(1/k). This rate is typical for the Gradient Method, and from the accelerated version (4.9) we can expect much more. Let us show how to achieve a better result.

Consider the following constrained optimization problem:

    min_{x ∈ Q} f(x),    (5.8)

where Q is a closed convex set, and f is a convex function with Lipschitz continuous gradient. Let us fix a tolerance parameter δ > 0 and a starting point x_0 ∈ Q. Define

    Ψ(x) = σ_Q(x) + (δ/2)||x − x_0||².

Consider now the unconstrained minimization problem (4.1) with composite objective φ(x) = f(x) + Ψ(x). Note that the function Ψ is strongly convex with parameter µ_Ψ = δ. Hence, in view of Theorem 6, the method A(x_0, L_0, δ) converges as follows:

    φ(x_k) − φ(x*) ≤ (γ_u L_f / 4) ||x* − x_0||² [ 1 + ( δ/(2 γ_u L_f) )^{1/2} ]^{−2(k−1)}.    (5.9)

For simplicity, we can choose γ_d = γ_u in order to keep the bounds L_0 ≤ L_k ≤ γ_u L_f valid for all k ≥ 0. Let us now compute T_k = G(x_k, L_k).T and M_k = G(x_k, L_k).L. Then

    φ(x_k) − φ(x*) ≥ φ(x_k) − φ(T_k) ≥ (1/(2 M_k)) ||g_{M_k}(x_k)||²,    L_0 ≤ M_k ≤ γ_u L_f,

and from (5.9) we obtain the following estimate:

    ||g_{M_k}(x_k)|| ≤ (γ_u L_f / 2^{1/2}) ||x* − x_0|| [ 1 + ( δ/(2 γ_u L_f) )^{1/2} ]^{−(k−1)}.    (5.10)

In our case, the first-order optimality condition (4.2) for computing T_{M_k}(x_k) can be written as follows:

    ∇f(x_k) + δ B (T_k − x_0) + ξ_k = g_{M_k}(x_k),    (5.11)

where ξ_k ∈ ∂σ_Q(T_k). Note that for any y ∈ Q we have

    0 = σ_Q(y) ≥ σ_Q(T_k) + ⟨ξ_k, y − T_k⟩ = ⟨ξ_k, y − T_k⟩.    (5.12)

Hence, for any direction u ∈ F(T_k) with ||u|| = 1 we obtain

    ⟨∇f(T_k), u⟩ ≥ ⟨∇f(x_k), u⟩ − (L_f/M_k) ||g_{M_k}(x_k)||
        = ⟨g_{M_k}(x_k) − δ B(T_k − x_0) − ξ_k, u⟩ − (L_f/M_k) ||g_{M_k}(x_k)||
        ≥ −δ ||T_k − x_0|| − (1 + L_f/M_k) ||g_{M_k}(x_k)||.

Assume now that the size of the set Q does not exceed R, and take δ = ε L_0. Let us choose the number of iterations k from the inequality

    [ 1 + ( ε L_0/(2 γ_u L_f) )^{1/2} ]^{−(k−1)} ≤ ε.

Then the residual of the first-order optimality conditions satisfies the following inequality:

    ⟨∇f(T_k), u⟩ ≥ −ε R [ L_0 + (γ_u L_f / 2^{1/2}) (1 + L_f/L_0) ],   u ∈ F(T_k),  ||u|| = 1.    (5.13)

For that, the required number of iterations k is at most of the order O( ε^{−1/2} ln(1/ε) ).

5.3 Unknown parameter of strongly convex objective

In Section 5.1 we discussed two efficient strategies for minimizing a strongly convex function with a known estimate of the convexity parameter µ_ˆf. However, usually this information is not available. We can easily obtain only an upper estimate for this value, for example by the inequality

    µ_ˆf ≤ S_L(x) ≤ L_ˆf,   x ∈ Q.

Let us show that such a bound can also be used for designing an efficient optimization strategy for strongly convex functions.

For problem (5.1), assume that we have some guess µ for the parameter µ_ˆf and a starting point u_0 ∈ Q. Denote φ_0(x) = ˆf(x) + σ_Q(x). Let us choose

    x_0 = G_{φ_0}(u_0, L_0).T,   M_0 = G_{φ_0}(u_0, L_0).L,   S_0 = G_{φ_0}(u_0, L_0).S,

and minimize the composite objective (5.2) by the method A(x_0, M_0, µ), using the following stopping criterion:

    Compute:  v_k = G_{φ_0}(x_k, L_k).T,   M_k = G_{φ_0}(x_k, L_k).L.
    Stop the stage if:
      (A):  ||g_{M_k}(x_k)[φ_0]|| ≤ (1/2) ||g_{M_0}(u_0)[φ_0]||,   or
      (B):  (M_k/A_k)^{1/2} (1 + S_0/M_0) ≤ µ/4.    (5.14)

If the stage was terminated by condition (A), we call it successful. In this case, we run the next stage, taking v_k as the new starting point and keeping the estimate µ of the convexity parameter unchanged.

Suppose the stage was terminated by condition (B) (an unsuccessful stage). If µ were a correct lower bound for the convexity parameter µ_ˆf, then

    (1/(2 M_k)) ||g_{M_k}(x_k)[φ_0]||² ≤ ˆf(x_k) − ˆf(x*) ≤ ||x_0 − x*||² / (2 A_k)
        ≤ (1/(2 A_k µ²)) (1 + S_0/M_0)² ||g_{M_0}(u_0)[φ_0]||².

Hence, in view of condition (B), in this case the stage would have been terminated by condition (A). Since this did not happen, we conclude that µ > µ_ˆf. Therefore, we redefine µ := µ/2 and run the stage again, keeping the old starting point x_0.

We are not going to present all the details of the complexity analysis of the above strategy. It can be shown that, for generating an ε-solution of problem (5.1) with strongly convex objective, it needs

    O( κ_ˆf^{1/2} ln κ_ˆf ) + O( κ_ˆf^{1/2} ln κ_ˆf · ln( κ_ˆf / ε ) ),    κ_ˆf := L_ˆf / µ_ˆf,

calls of the oracle. The first term in this bound corresponds to the total number of oracle calls during all unsuccessful stages. The factor κ_ˆf^{1/2} ln κ_ˆf represents an upper bound on the length of any stage, independently of the variant of its termination.

6 Computational experiments

We tested the algorithms described above on a set of randomly generated Sparse Least Squares problems of the form

    Find  φ* = min_{x ∈ R^n} [ φ(x) := (1/2)||Ax − b||² + ||x||₁ ],    (6.1)

where A = (a_1, ..., a_n) is an m×n dense matrix with m < n. All problems were generated with known optimal solutions, which can be obtained from the dual representation of the initial primal problem (6.1):

    min_x [ (1/2)||Ax − b||² + ||x||₁ ]
      = min_x max_u [ ⟨u, b − Ax⟩ − (1/2)||u||² + ||x||₁ ]
      = max_u [ ⟨b, u⟩ − (1/2)||u||² + min_x ( ||x||₁ − ⟨A^T u, x⟩ ) ]
      = max_u { ⟨b, u⟩ − (1/2)||u||² :  ||A^T u||_∞ ≤ 1 }.    (6.2)

Thus, the problem dual to (6.1) consists in finding the Euclidean projection of the vector b ∈ R^m onto the dual polytope

    D = { y ∈ R^m : ||A^T y||_∞ ≤ 1 }.

This interpretation explains the changing sparsity of the optimal solution x*(τ) of the following parametric version of problem (6.1):

    min_x [ φ_τ(x) := (1/2)||Ax − b||² + τ ||x||₁ ].    (6.3)

Indeed, for τ > 0 we have

    φ_τ(x) = τ² [ (1/2)|| A(x/τ) − b/τ ||² + || x/τ ||₁ ].

Hence, in the dual problem we project the vector b/τ onto the polytope D. The nonzero components of x*(τ) correspond to the active facets of D. Thus, for τ big enough, we have b/τ ∈ int D, which means x*(τ) = 0. When τ decreases, x*(τ) becomes more and more dense. Finally, if all facets of D are in general position, x*(τ) has exactly m nonzero components as τ → 0.

In our computational experiments, we compare three minimization methods. Two of them maintain recursively the relations (4.4). This feature allows us to classify them as primal-dual methods. Indeed, denote

    φ*(u) = (1/2)||u||² − ⟨b, u⟩.

As we have seen in (6.2),

    φ(x) + φ*(u) ≥ 0   ∀x ∈ R^n,  u ∈ D.    (6.4)

Moreover, the lower bound is achieved only at the optimal solutions of the primal and dual problems. For some sequence {z_i} and a starting point z_0 ∈ dom Ψ, the relations (4.4) ensure

    A_k φ(x_k) ≤ min_x { Σ_{i=1}^{k} a_i [ f(z_i) + ⟨∇f(z_i), x − z_i⟩ ] + A_k Ψ(x) + (1/2)||x − z_0||² }.    (6.5)
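The weak-duality relation (6.4) is easy to verify numerically. The sketch below (with f, Ψ as in (6.1); the use of plain proximal-gradient iterations and the rescaling of the dual candidate into D are assumptions for the illustration) produces a feasible dual point and a nonnegative duality gap:

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def primal(A, b, x):                      # phi(x) in (6.1)
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + np.abs(x).sum()

def dual(A, b, u):                        # phi_*(u) = 0.5*||u||^2 - <b, u>
    return 0.5 * u @ u - b @ u

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 40))
b = rng.standard_normal(15)
L = np.linalg.norm(A.T @ A, 2)

# Approximate primal solution by plain proximal-gradient iterations.
x = np.zeros(40)
for _ in range(2000):
    x = soft(x - A.T @ (A @ x - b) / L, 1.0 / L)

# Dual candidate u = b - Ax, scaled into D = {u : ||A^T u||_inf <= 1}.
u = (b - A @ x) / max(1.0, np.linalg.norm(A.T @ (b - A @ x), np.inf))
gap = primal(A, b, x) + dual(A, b, u)     # weak duality (6.4): gap >= 0
```

At the optimal primal-dual pair the gap vanishes; for approximate solutions it bounds both primal and dual suboptimality.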

In our situation, f(x) = (1/2)||Ax − b||², Ψ(x) = ||x||₁, and we choose z_0 = 0. Denote u_i = b − A z_i. Then ∇f(z_i) = −A^T u_i, and therefore

    f(z_i) − ⟨∇f(z_i), z_i⟩ = (1/2)||u_i||² + ⟨u_i, b − u_i⟩ = ⟨b, u_i⟩ − (1/2)||u_i||² = −φ*(u_i).

Denoting

    ū_k = (1/A_k) Σ_{i=1}^{k} a_i u_i,    (6.6)

we obtain

    A_k [ φ(x_k) + φ*(ū_k) ] ≤ A_k φ(x_k) + Σ_{i=1}^{k} a_i φ*(u_i)
        ≤ min_x { Σ_{i=1}^{k} a_i ⟨∇f(z_i), x⟩ + A_k Ψ(x) + (1/2)||x||² } ≤ 0,

where the last inequality follows by taking x = 0. In view of (6.4), this means that ū_k cannot be feasible:

    φ*(ū_k) ≤ −φ(x_k) ≤ −φ* = min_{u ∈ D} φ*(u).    (6.7)

Let us measure the level of infeasibility of these points. Note that the minimum of the optimization problem in (6.5) is attained at x = v_k. Hence, the corresponding first-order optimality conditions ensure

    || Σ_{i=1}^{k} a_i A^T u_i − B v_k ||_∞ ≤ A_k.

Therefore, |⟨a_i, ū_k⟩| ≤ 1 + (1/A_k) |(B v_k)^{(i)}|, i = 1, ..., n. Assume that the matrix B in the metric is diagonal:

    B^{(i,j)} = d_i if i = j,  0 otherwise.

Then |⟨a_i, ū_k⟩| ≤ 1 + (d_i/A_k) |v_k^{(i)}|, and

    ρ(ū_k) := [ Σ_{i=1}^{n} (1/d_i) ( |⟨a_i, ū_k⟩| − 1 )₊² ]^{1/2} ≤ (1/A_k) ||v_k|| ≤ (2/A_k) ||x*||,    (6.8)

where (α)₊ = max{α, 0}, and the last inequality follows from (4.6). Thus, we can use the function ρ(·) as a dual infeasibility measure. In view of (6.7), it is a reasonable stopping criterion for our primal-dual methods.

For generating the random test problems, we apply the following strategy.

Choose m* ≤ m, the number of nonzero components of the optimal solution x* of problem (6.1), and a parameter ρ > 0 responsible for the size of x*.

Generate randomly a matrix B ∈ R^{m×n} with elements uniformly distributed in the interval [−1, 1].

Generate randomly a vector v* ∈ R^m with elements uniformly distributed in [0, 1]. Define y* = v*/||v*||.

Sort the entries of the vector B^T y* in decreasing order of their absolute values. For the sake of notation, assume that this coincides with the natural ordering of the columns.

For i = 1, ..., n, define a_i = α_i b_i with α_i > 0 chosen according to the following rule:

    α_i = 1/|⟨b_i, y*⟩|       for i = 1, ..., m*,
    α_i = 1                    if |⟨b_i, y*⟩| ≤ 0.1 and i > m*,
    α_i = ξ_i/|⟨b_i, y*⟩|      otherwise,

where the ξ_i are uniformly distributed in [0, 1].

For i = 1, ..., n, generate the components of the primal solution:

    x*^{(i)} = ξ*_i · sign(⟨a_i, y*⟩)   for i ≤ m*,    x*^{(i)} = 0 otherwise,

where the ξ*_i are uniformly distributed in [0, ρ/√m*].

Define b = y* + A x*.

Thus, the optimal value of the randomly generated problem (6.1) can be computed as

    φ* = (1/2)||y*||² + ||x*||₁.

Indeed, A x* − b = −y*, and the optimality of x* follows from ⟨a_i, y*⟩ = sign(x*^{(i)}) on the support and |⟨a_i, y*⟩| ≤ 1 elsewhere. In the first series of tests, we use this value in the termination criterion. Let us look first at the results of minimization of two typical random problem instances.
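A simplified version of this generation strategy can be sketched as follows (the 0.1 threshold and the uniform distributions follow the text above; the remaining details are assumptions of the sketch):

```python
import numpy as np

def generate_instance(m, n, m_star, rho, seed=0):
    """Build A, b with a known optimal solution x_star of
    min 0.5*||Ax - b||^2 + ||x||_1, following the strategy in the text."""
    rng = np.random.default_rng(seed)
    B = rng.uniform(-1.0, 1.0, (m, n))
    v = rng.uniform(0.0, 1.0, m)
    y = v / np.linalg.norm(v)                       # y* on the unit sphere
    B = B[:, np.argsort(-np.abs(B.T @ y))]          # sort columns by |<b_i, y*>|
    c = B.T @ y
    alpha = np.empty(n)
    alpha[:m_star] = 1.0 / np.abs(c[:m_star])       # |<a_i, y*>| = 1 on support
    for i in range(m_star, n):
        alpha[i] = 1.0 if abs(c[i]) <= 0.1 else rng.uniform() / abs(c[i])
    A = B * alpha                                   # a_i = alpha_i * b_i
    x = np.zeros(n)
    x[:m_star] = rng.uniform(0.0, rho / np.sqrt(m_star), m_star) * np.sign(c[:m_star])
    b = y + A @ x
    return A, b, x, 0.5 * y @ y + np.abs(x).sum(), y

A, b, x_star, phi_star, y_star = generate_instance(20, 60, 5, 1.0)
# Optimality of x_star: A^T y* is a subgradient of ||.||_1 at x_star.
```

By construction, A x* − b = −y* and ||A^T y*||_∞ ≤ 1 with equality on the support, which certifies x* and the optimal value φ*.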

The first problem is relatively easy. Problem : n = 4000, m = 000, m = 00, ρ =. PG DG AC Gap k #Ax SpeedUp k #Ax SpeedUp k #Ax SpeedUp 4 0.% 4 0.85% 4 0.4% 3 8 0.0% 3 0.8% 4 8.4% 0 9 0.4% 8 38 0.89% 8 60.47% 3 8 83 0.3% 5 3.7% 4 08 4.06% 4 59 476 0.88% 56 777 3.45% 40 36 7.50% 5 557 670.53% 565 84 6.% 74 588 9.47% 6 954 86.3% 94 470 5.7% 98 780 5.79% 7 55 3765 0.86% 57 68 3.45% 8 940 8.6% 8 430 49 0.49% 466 738.0% 38 096.73% 9 547 464 0.6% 63 8080.3% 56 40 8.9% 0 640 490 0.4% 743 873 0.6% 73 380 4.97% 7 567 0.07% 849 943 0.33% 88 500 3.0% 788 5364 0.04% 935 967 0.7% 0 608.67% 3 847 5539 0.0% 003 003 0.09% 6 70 0.96% 4 898 5693 0.0% 06 0303 0.05% 30 836 0.55% 5 944 583 0.0% 3 0563 0.05% 48 968 0.3% 6 987 596 0.00% 64 087 0.03% 65 0.9% 7 09 6085 0.00% 7 083 0.0% 79 4 0.0% 8 07 65 0.00% 7 357 0.0% 305 43 0.06% 9 0 6359 0.00% 33 65 0.00% 34 504 0.03% 0 65 6495 0.00% 448 38 0.00% 39 544 0.0% In this table, the column Gap shows the relative decrease of the initial residual. In the rest of the table, we can see the computational results for three methods: Primal gradient method 3.3), abbreviated as PG. Dual version of the gradient method 4.7), abbreviated as DG. Accelerated gradient method 4.9), abbreviated as AC. In all methods, we use the following values of the parameters: γ u = γ d =, x 0 = 0, L 0 = max in a i L f, µ = 0. Let us explain the remaining columns of this table. For each method, the column k shows the number of iterations necessary for reaching the corresponding reduction of the initial gap in the function value. Column Ax shows the necessary number of matrix-vector multiplications. Note that for computing the value fx) we need one multiplication. If in addition, we need to compute the gradient, we need one more multiplication. For example, in accordance to the estimate 4.3), each iteration of 4.9) needs four computations of the pair function/gradient. Hence, in this method we can expect eight matrix-vector multiplications per iteration. 
For the Gradient Method (3.3), we need on average two calls of the oracle per iteration. However, one of them is made inside the line-search

procedure 3.) and it requires only the function value. Hence, in this case we expect to have three matrix-vector multiplications per iteration. In the table above, we can observe the remarkable accuracy of our predictions. Finally, the column SpeedUp represents the absolute accuracy of current approximate solution as a percentage of the worst-case estimate given by the corresponding rate of convergence. Since the exact L f is unknown, we use L 0 instead. We can see that all methods usually significantly outperform the theoretically predicted rate of convergence. However, for all of them, there are some parts of the trajectory where the worst-case predictions are quite accurate. This is even more evident from our second table, which corresponds to a more difficult problem instance. Problem : n = 5000, m = 500, m = 00, ρ =. PG DG AC Gap k #Ax SpeedUp k #Ax SpeedUp k #Ax SpeedUp 4 0.4% 4 0.96% 4 0.6% 6 0.0% 8 0.8% 3 4 0.9% 5 7 0.% 5 4 0.8% 5 40.49% 3 33 0.9% 45 0.77% 8 64.83% 4 38 3 0.30% 38 90.% 9 48 5.45% 5 34 703 0.9% 38 89 3.69% 5 46 0.67% 6 07 308.98% 06 58 7.89% 06 848 43.08% 7 40 706.3% 387 933 9.7% 60 80 48.70% 8 368 043.77% 3664 838 7.05% 04 68 39.54% 9 4677 4030.% 4664 338 4.49% 45 956 8.60% 0 540 630 0.65% 539 6958.6% 88 300 9.89% 5938 785 0.36% 5879 9393.4% 330 636 3.06% 6335 9006 0.9% 68 3088 0.77% 370 956 8.0% 3 6637 99 0.0% 647 3353 0.4% 40 3 4.77% 4 6859 0577 0.05% 6670 33348 0.% 49 344.7% 5 70 06 0.03% 6835 3473 0.3% 453 366.49% 6 76 483 0.0% 6978 34888 0.05% 47 3764 0.83% 7 78 84 0.0% 708 35539 0.05% 485 387 0.4% 8 737 5 0.00% 75 363 0.03% 509 4068 0.4% 9 7438 33 0.00% 7335 36673 0.0% 55 49 0.% 0 749 474 0.00% 7433 3763 0.0% 547 437 0.07% In this table, we can see that the Primal Gradient Method still significantly outperforms the theoretical predictions. This is not too surprising since it can, for example, automatically accelerate on strongly convex functions see Theorem 5). 
All other methods would require in this case some explicit changes in their schemes. However, despite all these discrepancies, the main conclusion of our theoretical analysis is confirmed: the accelerated scheme (4.9) significantly outperforms the primal and dual variants of the Gradient Method.

In the second series of tests, we studied the ability of the primal-dual schemes (4.7) and (4.9) to decrease the infeasibility measure ρ(·) (see (6.8)). This problem, at least for the Dual Gradient Method (4.7), appears to be much harder than the primal minimization

problem 6.). Let us look at the following results. Problem 3: n = 500, m = 50, m = 5, ρ =. DG AC Gap k #Ax φ SpeedUp k #Ax φ SpeedUp 8.5 0 0 8.6% 6 3.6 0 0.80% 5 5.4 0 0 9.35% 7 56 8.8 0 5.55% 3 64 6.0 0 3.7% 88 5.3 0 0.96% 3 6 30 3.9 0.69% 5 0 4.4 0 9.59% 4 48 39.7 0.3% 64 3. 0 9.% 5 03 54.6 0 3.8% 35 76.8 0 5.83% 6 43 8.3 0 5.64% 54 43.0 0 3.75% 7 804 409 3.0 0 5.93% 86 688 4.6 0 39.89% 8 637 883 6.3 0 3 6.4% 976.8 0 40.% 9 398 6488 4.6 0 4 6.6% 69 348 5.3 0 3 38.58% 0 4837 476.8 0 7 9.33% 4 788 7.7 0 4 34.8% 494 470. 0 4 9.97% 30 404 8.0 0 5 30.88% 549 5734.3 0 5 5.6% 49 335.7 0 5 9.95% 3 5790 8944.3 0 5.9% 584 4668 5.3 0 6 9.% 4 6474 3364 0.0.67% 649 588 4. 0 7 9.48% In this table we can see the computational cost for decreasing the initial value of ρ in 4 0 4 times. Note that both methods require more iterations than for Problem, which was solved up to accuracy in the objective function of the order 0 0 6. Moreover, for reaching the required level of ρ, method 4.7) has to decrease the residual in the objective up to machine precision, and the norm of gradient mapping up to 0. The accelerated scheme is more balanced: the final residual in φ is of the order 0 6, and the norm of the gradient mapping was decreased only up to.3 0 3. Let us look at a bigger problem. Problem 4: n = 000, m = 00, m = 50, ρ =. DG AC Gap k #Ax φ SpeedUp k #Ax φ SpeedUp 8 3.7 0 0 6.4% 4. 0 0.99% 5 4.0 0 0 7.75% 7 56.4 0 0.7% 5 74.0 0 0.56% 96 8.7 0 5.49% 3 37 83 6.9 0 4.73% 7 3 6.8 0 6.66% 4 83 44 4.5 0 6.49% 6 08 4.7 0 0.43% 5 98 989.4 0 9.79% 4 336.5 0 6.76% 6 445 4 7.8 0.8% 65 50.0 0 3.4% 7 38 6639. 0 33.5% 9 74 3.6 0 3.50% 8 675 3373 4. 0 3 33.48% 5 996. 0 30.07% 9 4508 535 5.6 0 5 8.% 76 404.6 0 3 7.85% 0 470 3503.7 0 0 4.7% 40 96 4.4 0 4 6.08% 4869 4334. 0 5 7.6% 38 60 7.7 0 5 6.08% 636 375. 0 5 4.88% 465 376 6.5 0 6 6.0% 3 88 6436. 0 5 5.0% 638 5096.4 0 6 4.6% 4 6354 8766 4.4 0 5 5.4% 704 568 7.8 0 7 4.6% 9

As compared with Problem 3, in Problem 4 the sizes are doubled. This makes almost no difference for the accelerated scheme, but for the Dual Gradient Method, the computational expenses grow substantially. The further increase of dimension makes the latter scheme impractical. Let us look at how these methods work on Problem with ρ ) being a termination criterion. Problem a: n = 4000, m = 000, m = 00, ρ =. DG AC Gap k #Ax φ SpeedUp k #Ax φ SpeedUp 8.3 0.88%.4 0 0.99% 5 4. 0 3.44% 8 60 8. 0 0 7.0% 7 83 5.8 0 0 6.00% 3 00 4.6 0 0 0.% 3 44 9 3.5 0 0 7.67% 0 60 3.5 0 0.0% 4 00 497.7 0 0 8.94% 8 0.9 0 0.0% 5 34 68.9 0 0 0.5% 44 348. 0 0 4.79% 6 63 353.0 0 0 4.8% 78 60.0 0 0 3.46% 7 94 9568.0 0.50% 7 93.9 0 6.44% 8 3704 854 4.6 0 7 0.77% 57 5 6.8 0 3.88% 9 373 8678.4 0 4 5.77% 688 5.3 0 3.63% 0 Line search failure... 87 88.0 0 4 9.87% 39 30.5 0 5 8.43% 5 468 7.0 0 6 6.48% 3 693 5536 4.5 0 7 4.40% 4 745 5948 3.8 0 7 3.76% The reason of the failure of the Dual Gradient Method is quite interesting. In the end, it generates the points with very small residual in the value of the objective function. Therefore, the termination criterion in the gradient iteration 3.) cannot work properly due to the rounding errors. In the accelerated scheme 4.9), this does not happen since the decrease of the objective function and the dual infeasibility measure is much more balanced. In some sense, this situation is natural. We have seen that on the current test problems all methods converge faster at the end. On the other hand, the rate of convergence of the dual variables ū k is limited by the rate of growth of coefficients a i in the representation 6.6). For the Dual Gradient Method, these coefficients are almost constant. For the accelerated scheme, they grow proportionally to the iteration counter. We hope that the numerical examples above clearly demonstrate the advantages of the accelerated gradient method 4.9) with the adjustable line search strategy. 
It is interesting to check numerically how this method works in other situations. Of course, the first candidates to try are the different applications of the smoothing technique [10]. However, even for the Sparse Least Squares problem (6.1) there are many potential improvements. Let us discuss one of them.

Note that we treated problem (6.1) with a quite general model of the composite objective, ignoring the important fact that the function f is quadratic. The characteristic property of quadratic functions is that they have a constant second derivative. Hence, it is natural to select the operator B in the metric taking into account the structure of the Hessian of f. Let us define

    B = diag(∇²f(x)) = diag(A^T A).

Then

    ⟨B e_i, e_i⟩ = ||a_i||² = ||A e_i||² = ⟨∇²f(x) e_i, e_i⟩,   i = 1, ..., n,
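A sketch of one composite gradient step in this diagonal metric follows (an illustration only, not the paper's full scheme; the doubling backtracking loop on L is an added assumption):

```python
import numpy as np

def phi(A, b, x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + np.abs(x).sum()

def scaled_step(A, b, x, L):
    """Composite gradient step in the metric B = diag(A^T A): minimize
    <grad f(x), u - x> + (L/2) * sum_i d_i*(u_i - x_i)^2 + ||u||_1, which
    splits into componentwise soft-thresholding with thresholds 1/(L*d_i)."""
    d = np.sum(A * A, axis=0)                 # d_i = ||a_i||^2
    z = x - A.T @ (A @ x - b) / (L * d)
    return np.sign(z) * np.maximum(np.abs(z) - 1.0 / (L * d), 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x, Lk = np.zeros(10), 1.0
for _ in range(50):
    # Simple backtracking: double L until the step does not increase phi.
    # This terminates, since L * diag(A^T A) dominates A^T A once L >= n.
    while phi(A, b, scaled_step(A, b, x, Lk)) > phi(A, b, x) + 1e-12:
        Lk *= 2.0
    x = scaled_step(A, b, x, Lk)
```

The appeal of this choice of B is that the per-coordinate curvatures d_i = ||a_i||² match the diagonal of the Hessian exactly, so the metric adapts to badly scaled columns of A at no extra cost.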