Radial Subgradient Descent

Benjamin Grimmer (bdg79@cornell.edu; Cornell University, Ithaca NY)

Abstract. We present a subgradient method for minimizing non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm which is conceptually more natural, has notably improved convergence rates, and for which the analysis is surprisingly simple. At each iteration, the algorithm takes a subgradient step and then performs a line search to move radially towards (or away from) the known feasible point. Our convergence results have striking similarities to those of traditional methods that require Lipschitz continuity. Costly orthogonal projections typical of subgradient methods are entirely avoided.

1 Introduction

We consider the convex optimization problem of minimizing a lower-semicontinuous convex function $f : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ when a point $x_0$ in the interior of the domain of $f$ is known. Importantly, it is not assumed that $f$ is either smooth or Lipschitz. Note that any constrained convex optimization problem fits under this model by setting the function value of all infeasible points to infinity (provided the feasible region is closed and convex, and a point exists in the interior of both the feasible region and the domain of $f$). This model can easily be extended to allow affine constraints by considering the relative interior of the domain instead.

Without loss of generality, we have $x_0 = 0 \in \operatorname{int}\operatorname{dom} f$ and $f(0) < 0$, which can be achieved by replacing $f(x)$ by $f(x + x_0) - f(x_0) - h$ (for any constant $h > 0$). Let $f^* = \inf f(x)$ denote the function's minimum value (which equals $-\infty$ if the function is unbounded below).
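In code, this normalization is a thin wrapper around any implementation of $f$. The sketch below is our own illustration (the names `f`, `x0`, and `h` are placeholders); it fixes the convention used by the later snippets:

```python
import numpy as np

def normalize(f, x0, h=1.0):
    """Recenter f so the known interior point sits at the origin.

    Returns g with g(x) = f(x + x0) - f(x0) - h, so that
    g(0) = -h < 0, matching the paper's normalization.
    """
    x0 = np.asarray(x0, dtype=float)
    fx0 = f(x0)
    return lambda x: f(np.asarray(x, dtype=float) + x0) - fx0 - h
```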

This general minimization problem has been the focus of a recent paper by Renegar [1]. Renegar develops a framework for converting the original non-Lipschitz problem into an equivalent Lipschitz problem in a slightly lifted space. This transformation is geometric in nature and is applied to a conic reformulation of the original problem. By applying a subgradient approach to the reformulated problem, Renegar achieves convergence bounds analogous to those of traditional methods that assume Lipschitz continuity. One difference in Renegar's approach is that it guarantees relative accuracy of a solution rather than absolute accuracy. A point $x$ has absolute accuracy of $\epsilon$ if $f(x) - f^* < \epsilon$. In contrast, a point $x$ has relative accuracy of $\epsilon$ if $(f(x) - f^*)/(0 - f^*) < \epsilon$. Note that here we measure error relative to an objective value of $0$ since we assume $f(x_0) < 0$. This relative accuracy can be interpreted as a multiplicative accuracy through simple algebraic rearrangement:
\[ \frac{f(x) - f^*}{0 - f^*} < \epsilon \iff f(x) < f^*(1 - \epsilon). \]

From this framework, Renegar presents two algorithms addressing when the optimal objective value is known and when it is unknown (referred to below as Algorithms B and A in [1], respectively). The two (paraphrased) convergence bounds stated in Theorems 1 and 2 are then proven. Renegar's analysis requires two additional assumptions on $f$ beyond the basic assumptions we make of lower-semicontinuity and convexity. In particular, they further assume that the minimum objective value is finite and attained at some point, and that the set of optimal solutions is bounded.

Let $\langle\cdot,\cdot\rangle$ denote an inner product on $\mathbb{R}^n$ with associated norm $\|\cdot\|$. Define $D$ to be the diameter of the sublevel set $\{x : f(x) \le f(0)\}$ and $R = \sup\{r \in \mathbb{R} : f(x) \le 0 \text{ for all } x \text{ with } \|x\| \le r\}$. These scalars appear only in convergence bounds, but are never assumed to be known. Note that the set of optimal solutions must be bounded for $D$ to be finite.

Theorem 1 (Renegar [1], Theorem 1.1). If $\{x_i\}$ is the sequence generated by Algorithm B in [1] (which is given the value of $f^*$), then for any $\epsilon > 0$, some iteration
\[ i \le 8\left(\frac{D}{R}\right)^2\left(\frac{1}{\epsilon^2} + \frac{1}{\epsilon}\log_{4/3}\left(1 + \frac{D}{R}\right)\right) \quad\text{has}\quad \frac{f(x_i) - f^*}{0 - f^*} \le \epsilon. \]

Theorem 2 (Renegar [1], Theorem 1.2). Consider any $0 < \epsilon < 1$. If $\{x_i\}$ is the sequence generated by Algorithm A in [1] (which is given the value of $\epsilon$), then some iteration
\[ i \le 4\left(\frac{D}{R}\right)^2\left(4\left(\frac{1-\epsilon}{\epsilon}\right)^2 + 4\left(\frac{1-\epsilon}{\epsilon}\right)\right) + \log_{4/3}\left(\frac{1-\epsilon}{\epsilon}\cdot\frac{D}{R}\right) + \log_{4/3}\left(\frac{1}{\epsilon}\right) + 1 \quad\text{has}\quad \frac{f(x_i) - f^*}{0 - f^*} \le \epsilon. \]

These bounds are remarkable in that they attain the same rate of growth with respect to $\epsilon$ as traditional methods that assume Lipschitz continuity. However, the constants involved could be very large. In particular, the value of $D$ can be enormous if a small perturbation of the problem would have an unbounded set of optimal solutions.

Another notable property of Renegar's algorithms is that they completely avoid computing orthogonal projections. Instead, only radial projections are done, which require only a line search to compute. This can lead to substantial improvements in runtime over projected subgradient methods, which require an orthogonal projection every iteration.

We take a different perspective on the transformation underlying Renegar's framework. Instead of first converting the convex problem into conic form in a lifted space, we define an equivalent transformation directly in terms of the original function. Before stating our function-oriented transformation, we define the perspective function of $f$ for any $\gamma > 0$ as $f_p(x, \gamma) = \gamma f(x/\gamma)$. Note that perspective functions have been studied in a variety of other contexts. Two interesting recent works have considered optimization problems where perspective functions occur naturally. In [2], Combettes and Müller investigate properties of the proximity operator of a perspective function. In [3], Aravkin et al. construct a duality framework based on gauges, which extends to perspective functions.

Based on the perspective function of $f$, our functional version of Renegar's transformation is defined as follows.

Definition 1. The radial reformulation of $f$ of level $z \in \mathbb{R}$ is given by $\gamma_z(x) = \inf\{\gamma > 0 : f_p(x, \gamma) \le z\}$.
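Since $f_p(x,\cdot)$ is strictly decreasing whenever $f(0) < 0$ (this is shown in Section 2), $\gamma_z(x)$ can be evaluated by a one-dimensional bracket-and-bisect search. Below is a minimal Python sketch, assuming `f` is implemented to return `np.inf` outside its domain; the doubling/halving bracket and the tolerance are our illustrative choices, and a collapsed bracket is reported as $\gamma_z(x) = 0$ (cf. Proposition 3 below):

```python
import numpy as np

def perspective(f, x, gamma):
    """Perspective function f_p(x, gamma) = gamma * f(x / gamma), gamma > 0."""
    return gamma * f(np.asarray(x, dtype=float) / gamma)

def gamma_z(f, x, z, tol=1e-10, max_halvings=200):
    """Evaluate gamma_z(x) = inf{g > 0 : f_p(x, g) <= z} by bisection.

    Uses that f_p(x, .) is strictly decreasing, so the feasible set is an
    interval [gamma_z(x), infinity). Returns 0.0 when the lower bracket
    collapses, signaling an unbounded direction (Proposition 3 below).
    """
    hi = 1.0
    while perspective(f, x, hi) > z:        # grow until feasible
        hi *= 2.0
    lo = hi / 2.0
    halvings = 0
    while perspective(f, x, lo) <= z:       # shrink until infeasible
        lo /= 2.0
        halvings += 1
        if halvings >= max_halvings:
            return 0.0                      # gamma_z(x) = 0: unbounded ray
    while hi - lo > tol * max(1.0, hi):     # bisect the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        if perspective(f, x, mid) <= z:
            hi = mid
        else:
            lo = mid
    return hi
```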

As a simple consequence of Renegar's transformation converting non-Lipschitz problems into Lipschitz ones, the radial reformulation of any convex function is both convex and Lipschitz (shown in Lemma 1 and Proposition 2). As a result, the radial reformulation is amenable to the application of traditional subgradient descent methods, even if $f$ is not. Using this functional version of Renegar's transformation, we present a modified version of Renegar's algorithms which is both conceptually simpler and achieves notably improved convergence bounds. We let $\partial\gamma_z(x)$ denote the set of subgradients of $\gamma_z(\cdot)$ at $x$. Our method is stated in Algorithm 1, with step sizes given by a positive sequence $\{\alpha_i\}$.

Algorithm 1 Radial Subgradient Descent
1: $x_0 = 0$, $z_0 = f(x_0)$
2: for $i = 0, 1, \dots$ do
3:   Select $\zeta_i \in \partial\gamma_{z_i}(x_i)$    ▷ Pick a subgradient of $\gamma_{z_i}(\cdot)$
4:   $\tilde{x}_{i+1} = x_i - \alpha_i\zeta_i$    ▷ Move in the subgradient direction
5:   if $\gamma_{z_i}(\tilde{x}_{i+1}) = 0$ then report unbounded objective and terminate
6:   $x_{i+1} = \tilde{x}_{i+1}/\gamma_{z_i}(\tilde{x}_{i+1})$, $z_{i+1} = z_i/\gamma_{z_i}(\tilde{x}_{i+1})$    ▷ Radially update current solution
7: end for

Critical to the strength of our results is the specification of the initial iterate, $x_0 = 0$. Each iteration of this algorithm takes a subgradient step (with respect to a radial reformulation $\gamma_{z_i}(\cdot)$ of changing level) and then moves the resulting point radially towards (or away from) the origin. This radial movement ensures that each iterate $x_i$ lies in the domain of $f$.
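The following sketch of Algorithm 1 builds on the `perspective` and `gamma_z` helpers above. A subgradient oracle for $\gamma_{z_i}$ is taken as given here (Proposition 4 below shows how to build one from normal vectors of $\operatorname{epi} f$), and the step-size rule is passed in as a callable so the choices from Corollary 4 and Theorems 5 and 6 can be plugged in. This is our illustrative implementation, not the paper's reference code:

```python
import numpy as np

def radial_subgradient_descent(f, n, subgrad_gamma, step_size, iters=1000):
    """Minimal sketch of Algorithm 1 (Radial Subgradient Descent).

    f             : objective with f(0) < 0 (see the normalization above)
    n             : dimension of the variable
    subgrad_gamma : oracle returning some zeta in the subdifferential of
                    gamma_z at x (one construction follows Proposition 4)
    step_size     : callable (i, x, z, zeta) -> alpha_i > 0
    """
    x = np.zeros(n)                    # x_0 = 0, critical to the analysis
    z = f(x)                           # z_0 = f(x_0) < 0
    for i in range(iters):
        zeta = subgrad_gamma(f, x, z)               # line 3: zeta_i
        x_t = x - step_size(i, x, z, zeta) * zeta   # line 4: subgradient step
        g = gamma_z(f, x_t, z)                      # radial line search
        if g == 0.0:                                # line 5: unbounded ray
            raise ValueError("objective unbounded below along a ray")
        x, z = x_t / g, z / g                       # line 6: radial update
    return x, z                        # by Lemma 6, f(x) <= z < 0 throughout
```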

This algorithm differs from Renegar's approach by performing a radial update at every step. Algorithm A of [1] only performs an update when the threshold $\gamma_z(x) \le 3/4$ is met, and Algorithm B of [1] never performs radial updates. Recently, in [4], Freund and Lu presented an interesting approach to first-order optimization with similarities to Renegar's approach. Their method utilizes a similar thresholding condition for doing periodic updates.

We first give a general convergence guarantee for the Radial Subgradient Descent algorithm, where $\operatorname{dist}(x_0, X)$ denotes the minimum distance from $x_0 = 0$ to a set $X$. As a consequence, we find that proper selection of the step size will guarantee that a subsequence of the iterates has objective values converging to the optimal value.

Theorem 3. Let $\{x_i\}$ be the sequence generated by Algorithm 1 with step sizes $\{\alpha_i\}$. Consider any $\hat{z} < 0$ with nonempty level set $\hat{X} = \{x : f(x) = \hat{z}\}$. Then for any $k \ge 0$, some iteration $i \le k$ either identifies that $f$ is unbounded on the ray $\{t\tilde{x}_{i+1} : t > 0\}$ or has
\[ \frac{f(x_i) - \hat{z}}{0 - f(x_i)} \;\le\; \frac{\operatorname{dist}(x_0, \hat{X})^2 + \frac{1}{R^2}\sum_{j=0}^{k}\alpha_j^2(\hat{z}/z_j)^2}{2\sum_{j=0}^{k}\alpha_j(\hat{z}/z_j)}. \]

Corollary 4. Consider any positive sequence $\{\beta_i\}$ with $\sum_{i=0}^{\infty}\beta_i = \infty$ and $\sum_{i=0}^{\infty}\beta_i^2 < \infty$. Let $\{x_i\}$ be the sequence generated by Algorithm 1 with $\alpha_i = \beta_i|z_i|$. Then either some iteration identifies that $f$ is unbounded on the ray $\{t\tilde{x}_{i+1} : t > 0\}$ or
\[ \lim_{k\to\infty}\min_{i\le k} f(x_i) = f^*. \]

Although Corollary 4 proves the convergence of our algorithm, it does not provide any guarantees on the rate of convergence. As done in Renegar's analysis, we bound the convergence rate of our algorithm when the optimal objective value $f^*$ is known and when it is unknown. The resulting convergence bounds for two particular choices of step sizes are given in the following theorems.

Theorem 5. Suppose $f^*$ is finite and attained on some set of points $X^*$. Let $\{x_i\}$ be the sequence generated by Algorithm 1 with $\alpha_i = \frac{z_i - f^*}{0 - f^*}\cdot\frac{1}{\|\zeta_i\|^2}$. Then for any $\epsilon > 0$, some iteration
\[ i \le \left\lceil\left(\frac{\operatorname{dist}(x_0, X^*)}{R\,\epsilon}\right)^2\right\rceil - 1 \quad\text{has}\quad \frac{f(x_i) - f^*}{0 - f^*} \le \epsilon. \]

Theorem 6. Consider any $\epsilon > 0$ and $\hat{z} < 0$ with nonempty level set $\hat{X} = \{x : f(x) = \hat{z}\}$. Let $\{x_i\}$ be the sequence generated by Algorithm 1 with $\alpha_i = \frac{\epsilon z_i}{2\hat{z}\|\zeta_i\|^2}$. Then some iteration
\[ i \le \left\lceil\frac{4}{3}\left(\frac{\operatorname{dist}(x_0, \hat{X})}{R\,\epsilon}\right)^2\right\rceil - 1 \]
either identifies that $f$ is unbounded on the ray $\{t\tilde{x}_{i+1} : t > 0\}$ or has $\frac{f(x_i) - \hat{z}}{0 - \hat{z}} \le \epsilon$.

These convergence bounds are remarkably simple and should have fairly small constants in practice. Importantly, the bounds have no dependence on $D$, which means that (unlike [1]) we do not need to assume the problem has a bounded set of optimal solutions. It is also worth noting that this algorithm, like Renegar's algorithms, completely avoids computing orthogonal projections. As a result, its per-iteration cost can be drastically less than that of standard projected subgradient methods. We defer a deeper discussion of this improvement in iteration cost to [1].

In Section 2, we establish a number of relevant properties of our radial reformulation, which follow from the equivalent structures in Renegar's framework. Then, in Section 3, we prove our main results on the convergence of the Radial Subgradient Descent algorithm.
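To make the three step-size regimes concrete, here is how they might be passed to the sketch above. The sequence $\beta_i = (i+1)^{-3/4}$ is one arbitrary choice satisfying Corollary 4's conditions, and `f_star`, `z_hat`, and `eps` are placeholders for the known quantities each theorem assumes:

```python
import numpy as np

# Corollary 4: any positive beta_i with sum(beta_i) = inf, sum(beta_i^2) < inf.
def step_cor4(i, x, z, zeta):
    return (i + 1.0) ** -0.75 * abs(z)

# Theorem 5: Polyak-like rule when the optimal value f_star is known.
def step_thm5(i, x, z, zeta, f_star=-1.0):          # f_star: placeholder
    return (z - f_star) / (0.0 - f_star) / np.dot(zeta, zeta)

# Theorem 6: rule when a target level z_hat and accuracy eps are chosen.
def step_thm6(i, x, z, zeta, z_hat=-1.0, eps=0.1):  # placeholders
    return eps * z / (2.0 * z_hat * np.dot(zeta, zeta))
```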

2 Preliminaries

Renegar's framework begins by converting the problem of minimizing $f$ into conic form. Let $K$ be the closure of $\{(xs, s, ts) : s > 0, (x, t) \in \operatorname{epi} f\}$, which is the conic extension of the epigraph of $f$. Note that the restriction of $K$ to $s = 1$ is exactly the epigraph of $f$. Then Renegar considers the following equivalent conic form problem:
\[ \min\; t \quad \text{s.t.} \quad s = 1,\; (x, s, t) \in K. \tag{1} \]
Observe that $(0, 1, 0)$ lies in the interior of $K$. Then Renegar defines the following function, which lies at the heart of the framework. Although this function was originally developed for any conic program, we state it specifically in terms of the above program:
\[ \lambda(x, s, t) = \sup\{\lambda : (x, s, t) - \lambda(0, 1, 0) \in K\}. \]
At this point, it is easy to show the connection between this function, when restricted to $t = z$, and our radial reformulation of level $z$.

Lemma 1. For any $x \in \mathbb{R}^n$ and $z \in \mathbb{R}$, $\gamma_z(x) = 1 - \lambda(x, 1, z)$.

Proof. Follows directly from the definitions of $K$ and $\gamma_z(\cdot)$:
\[ \begin{aligned}
\lambda(x, 1, z) &= \sup\{\lambda : (x, 1, z) - \lambda(0, 1, 0) \in K\} \\
&= \sup\left\{\lambda : 1 - \lambda > 0,\ \left(\frac{x}{1-\lambda}, \frac{z}{1-\lambda}\right) \in \operatorname{epi} f\right\} \\
&= \sup\left\{\lambda < 1 : f\left(\frac{x}{1-\lambda}\right) \le \frac{z}{1-\lambda}\right\} \\
&= 1 - \inf\left\{(1-\lambda) > 0 : (1-\lambda)f\left(\frac{x}{1-\lambda}\right) \le z\right\} \\
&= 1 - \gamma_z(x). \qquad\Box
\end{aligned} \]

As a consequence of Propositions 2.1 and 3.2 of [1], we know that the radial reformulation is both convex and Lipschitz.

Proposition 2. For any $z \in \mathbb{R}$, the radial reformulation of level $z$, $\gamma_z(\cdot)$, is convex and Lipschitz with constant $1/R$ (independent of the level $z$).

We now see that the radial reformulation is notably more well-behaved than the original function $f$. However, to justify it as a meaningful proxy for the original function in an optimization setting, we need to relate their minimum values. Note first that $f_p$ is strictly decreasing in $\gamma$. To see this, observe that, for any $x \in \mathbb{R}^n$, all sufficiently large $\gamma$ have $f(x/\gamma) < f(0)/2 < 0$. Then it follows that $\lim_{\gamma\to\infty} f_p(x, \gamma) \le \lim_{\gamma\to\infty} \gamma f(0)/2 = -\infty$. Since perspective functions are convex (see [5] for an elementary proof of this fact), we conclude that $f_p$ is strictly decreasing in $\gamma$.
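As a quick illustration of Definition 1 and Proposition 2 (our toy example, not the paper's): for $f(x) = \|x\| - 1$ one has $f_p(x, \gamma) = \|x\| - \gamma$, hence $\gamma_z(x) = \|x\| - z$ for any level $z < 0$. This is convex and Lipschitz with constant exactly $1 = 1/R$, since here $R = 1$. The closed form also checks the bisection routine above:

```python
import numpy as np

# Toy example: f(x) = ||x|| - 1, so gamma_z(x) = ||x|| - z and R = 1.
f = lambda x: np.linalg.norm(x) - 1.0
x, z = np.array([3.0, 4.0]), -2.0
assert abs(gamma_z(f, x, z) - (np.linalg.norm(x) - z)) < 1e-6   # 5 - (-2) = 7
```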

In the following proposition, we establish such a connection between $f$ and any radial reformulation with a negative level.

Proposition 3. For any $z < 0$, the minimum value of $\gamma_z(\cdot)$ is $z/f^*$ (where $z/{-\infty} := 0$). Further, if $\gamma_z(x) = 0$, then $\lim_{t\to\infty} f(tx) = -\infty$ (that is, $x$ is an unbounded direction).

Proof. First we show that this minimum value lower bounds $\gamma_z(x)$ for all $x \in \mathbb{R}^n$. Since $f((f^*/z)x) \ge f^*$, we know that $\gamma = z/f^*$ has $f_p(x, \gamma) \ge z$. Thus $\gamma_z(x) \ge z/f^*$ since $f_p$ is strictly decreasing in $\gamma$.

Consider any sequence $\{x_i\}$ with $f(x_i) < 0$ and $\lim_i f(x_i) = f^*$. Observe that
\[ f_p\left(\frac{z}{f(x_i)}x_i, \frac{z}{f(x_i)}\right) = \frac{z}{f(x_i)}f(x_i) = z. \]
Since $f_p$ is strictly decreasing in $\gamma$, we have $\gamma_z((z/f(x_i))x_i) = z/f(x_i)$. It follows that $\lim_i \gamma_z((z/f(x_i))x_i) = z/f^*$. Then our lower bound is indeed the minimum value of the radial reformulation.

Our second observation follows from the definition of the radial reformulation (using the change of variables $t = 1/\gamma$):
\[ \gamma_z(x) = 0 \iff \inf\{\gamma > 0 : f(x/\gamma) \le z/\gamma\} = 0 \iff \sup\{t > 0 : f(tx) \le tz\} = \infty \implies \lim_{t\to\infty} f(tx) = -\infty. \qquad\Box \]

Finally, we give a characterization of the subgradients of our radial reformulation. Although this description is not necessary for our analysis, it helps establish the practicality of the Radial Subgradient Descent algorithm. We see that the subgradients of $\gamma_z(\cdot)$ can be computed easily from normal vectors of the epigraph of the original function. This result follows as a direct consequence of Proposition 7.1 of [1].

Proposition 4. For any $z < 0$, the subgradients of the radial reformulation are given by
\[ \partial\gamma_z(x) = \left\{ \frac{\gamma_z(x)}{\langle\zeta, x\rangle + \delta z}\,\zeta \;:\; (0, 0) \ne (\zeta, \delta) \in N_{\operatorname{epi} f}\left(\frac{x}{\gamma_z(x)}, \frac{z}{\gamma_z(x)}\right) \right\}. \]
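For differentiable $f$, the normal cone to $\operatorname{epi} f$ at a boundary point $(u, f(u))$ is generated by $(\nabla f(u), -1)$, so Proposition 4 gives a concrete subgradient oracle. A sketch follows, using a finite-difference gradient purely for illustration (both this choice and the test values are ours); on the toy example above, $\gamma_z(x) = \|x\| - z$ has gradient $x/\|x\|$, which the formula reproduces:

```python
import numpy as np

def subgrad_gamma(f, x, z, h=1e-7):
    """A subgradient of gamma_z at x via Proposition 4, assuming f is
    differentiable at u = x / gamma_z(x), where (grad f(u), -1) is a
    normal vector to the epigraph of f."""
    g = gamma_z(f, x, z)
    u = x / g
    grad = np.array([(f(u + h * e) - f(u - h * e)) / (2.0 * h)
                     for e in np.eye(len(u))])
    zeta, delta = grad, -1.0
    return g * zeta / (np.dot(zeta, x) + delta * z)

# Check against the toy example: expect x / ||x||.
f = lambda x: np.linalg.norm(x) - 1.0
x, z = np.array([3.0, 4.0]), -2.0
print(subgrad_gamma(f, x, z))   # approximately [0.6, 0.8]
```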

3 Analysis of Convergence

First, we observe that the following two properties hold at each iteration of our algorithm.

Lemma 5. At any iteration $k \ge 0$ of Algorithm 1, $\gamma_{z_k}(x_k) = 1$.

Proof. When $k = 0$, we have $f_p(x_0, 1) = z_0$, and so the result follows from $f_p$ being strictly decreasing in $\gamma$. The general case follows from simple algebraic manipulation:
\[ \begin{aligned}
\gamma_{z_{k+1}}(x_{k+1}) &= \inf\{\gamma > 0 : \gamma f(x_{k+1}/\gamma) \le z_{k+1}\} \\
&= \inf\left\{\gamma > 0 : \gamma f\left(\frac{\tilde{x}_{k+1}}{\gamma_{z_k}(\tilde{x}_{k+1})\gamma}\right) \le z_{k+1}\right\} \\
&= \inf\left\{\gamma > 0 : \gamma_{z_k}(\tilde{x}_{k+1})\gamma f\left(\frac{\tilde{x}_{k+1}}{\gamma_{z_k}(\tilde{x}_{k+1})\gamma}\right) \le z_k\right\} \\
&= \frac{1}{\gamma_{z_k}(\tilde{x}_{k+1})}\gamma_{z_k}(\tilde{x}_{k+1}) = 1. \qquad\Box
\end{aligned} \]

Lemma 6. At any iteration $k \ge 0$ of Algorithm 1, the following ordering holds: $f^* \le f(x_k) \le z_k < 0$.

Proof. The first inequality follows from $f^*$ being the minimum value of $f$. The second inequality is trivially true when $k = 0$. From the lower-semicontinuity of $f_p(x, \gamma)$, we know $f_p(\tilde{x}_{k+1}, \gamma_{z_k}(\tilde{x}_{k+1})) \le z_k$. Then we have the second inequality in general since
\[ f(x_{k+1}) = f\left(\frac{\tilde{x}_{k+1}}{\gamma_{z_k}(\tilde{x}_{k+1})}\right) \le \frac{z_k}{\gamma_{z_k}(\tilde{x}_{k+1})} = z_{k+1}. \]
The third inequality follows inductively since $z_0 < 0$ and $z_{k+1} = z_k/\gamma_{z_k}(\tilde{x}_{k+1}) < 0$. $\Box$

The traditional analysis of subgradient descent, assuming Lipschitz continuity, is based on an elementary inequality, which is stated in the following lemma.

Lemma 7. Consider any convex function $g : \mathbb{R}^n \to \mathbb{R}$, $x, y \in \mathbb{R}^n$, and $\zeta \in \partial g(x)$. Then for any $\alpha > 0$,
\[ \|(x - \alpha\zeta) - y\|^2 \le \|x - y\|^2 - 2\alpha(g(x) - g(y)) + \alpha^2\|\zeta\|^2. \]

Proof. Follows directly from applying the subgradient inequality $g(y) \ge g(x) + \langle\zeta, y - x\rangle$:
\[ \|(x - \alpha\zeta) - y\|^2 = \|x - y\|^2 - 2\alpha\langle\zeta, x - y\rangle + \alpha^2\|\zeta\|^2 \le \|x - y\|^2 - 2\alpha(g(x) - g(y)) + \alpha^2\|\zeta\|^2. \qquad\Box \]

The core of proving the traditional subgradient descent bounds is to inductively apply this lemma at each iteration. However, such an approach cannot be applied directly to Algorithm 1 since the iterates are rescaled and the underlying function changes every iteration. The key to proving our convergence bounds is setting up a modified inequality that can be applied inductively. This is done in the following lemma.

Lemma 8. Consider any $y$ with $f(y) < 0$. Then at any iteration $k \ge 0$ of Algorithm 1,
\[ \left\|\frac{f(y)}{z_{k+1}}x_{k+1} - y\right\|^2 \le \left\|\frac{f(y)}{z_k}x_k - y\right\|^2 - 2\alpha_k\left(\frac{f(y)}{z_k}\right)^2\frac{z_k - f(y)}{0 - f(y)} + \alpha_k^2\left(\frac{f(y)}{z_k}\right)^2\|\zeta_k\|^2. \]

Proof. Applying Lemma 7 on $\gamma_{z_k}(\cdot)$ with $x_k$ and $\frac{z_k}{f(y)}y$ implies
\[ \left\|\tilde{x}_{k+1} - \frac{z_k}{f(y)}y\right\|^2 \le \left\|x_k - \frac{z_k}{f(y)}y\right\|^2 - 2\alpha_k\left(\gamma_{z_k}(x_k) - \gamma_{z_k}\left(\frac{z_k}{f(y)}y\right)\right) + \alpha_k^2\|\zeta_k\|^2. \tag{2} \]
The value of $\gamma_{z_k}\left(\frac{z_k}{f(y)}y\right)$ can be derived directly. Observe that
\[ f_p\left(\frac{z_k}{f(y)}y, \frac{z_k}{f(y)}\right) = \frac{z_k}{f(y)}f(y) = z_k. \]
Then $\gamma_{z_k}\left(\frac{z_k}{f(y)}y\right) = \frac{z_k}{f(y)}$ since $f_p$ is strictly decreasing in $\gamma$. Combining this with Lemma 5 allows us to restate our inequality as
\[ \left\|\tilde{x}_{k+1} - \frac{z_k}{f(y)}y\right\|^2 \le \left\|x_k - \frac{z_k}{f(y)}y\right\|^2 - 2\alpha_k\left(1 - \frac{z_k}{f(y)}\right) + \alpha_k^2\|\zeta_k\|^2. \tag{3} \]
Multiplying through by $(f(y)/z_k)^2$ yields
\[ \left\|\frac{f(y)}{z_k}\tilde{x}_{k+1} - y\right\|^2 \le \left\|\frac{f(y)}{z_k}x_k - y\right\|^2 - 2\alpha_k\left(\frac{f(y)}{z_k}\right)^2\frac{z_k - f(y)}{0 - f(y)} + \alpha_k^2\left(\frac{f(y)}{z_k}\right)^2\|\zeta_k\|^2. \tag{4} \]
Noting that $\frac{\tilde{x}_{k+1}}{z_k} = \frac{x_{k+1}}{z_{k+1}}$ completes the proof. $\Box$

3.1 Proof of Theorem 3

We assume that for all $i \le k$ we have $\gamma_{z_i}(\tilde{x}_{i+1}) > 0$ (otherwise the theorem immediately holds by Proposition 3). Then the first $k$ iterates of Algorithm 1 are well-defined. Consider any $\hat{x} \in \hat{X}$. Inductively applying Lemma 8 with $y = \hat{x}$ produces
\[ \left\|\frac{\hat{z}}{z_{k+1}}x_{k+1} - \hat{x}\right\|^2 \le \left\|\frac{\hat{z}}{z_0}x_0 - \hat{x}\right\|^2 - \sum_{i=0}^{k}\left(2\alpha_i\left(\frac{\hat{z}}{z_i}\right)^2\frac{z_i - \hat{z}}{0 - \hat{z}} - \alpha_i^2\left(\frac{\hat{z}}{z_i}\right)^2\|\zeta_i\|^2\right). \tag{5} \]
Noting that $x_0 = 0$, this implies
\[ \sum_{i=0}^{k}2\alpha_i\left(\frac{\hat{z}}{z_i}\right)^2\frac{z_i - \hat{z}}{0 - \hat{z}} \le \|\hat{x}\|^2 + \sum_{i=0}^{k}\alpha_i^2\left(\frac{\hat{z}}{z_i}\right)^2\|\zeta_i\|^2. \tag{6} \]
Minimizing over all $\hat{x} \in \hat{X}$, we have
\[ \sum_{i=0}^{k}2\alpha_i\left(\frac{\hat{z}}{z_i}\right)^2\frac{z_i - \hat{z}}{0 - \hat{z}} \le \operatorname{dist}(x_0, \hat{X})^2 + \sum_{i=0}^{k}\alpha_i^2\left(\frac{\hat{z}}{z_i}\right)^2\|\zeta_i\|^2. \tag{7} \]
From Proposition 2, we have $\|\zeta_i\| \le 1/R$. Since $\left(\frac{\hat{z}}{z_i}\right)^2\frac{z_i - \hat{z}}{0 - \hat{z}} = \frac{\hat{z}}{z_i}\cdot\frac{z_i - \hat{z}}{0 - z_i}$, rearrangement of this inequality gives
\[ \min_{i\le k}\frac{z_i - \hat{z}}{0 - z_i} \le \frac{\operatorname{dist}(x_0, \hat{X})^2 + \frac{1}{R^2}\sum_{i=0}^{k}\alpha_i^2(\hat{z}/z_i)^2}{2\sum_{i=0}^{k}\alpha_i(\hat{z}/z_i)}. \tag{8} \]
Then Theorem 3 follows from the fact that $f(x_i) \le z_i < 0$, as shown in Lemma 6: since $t \mapsto \frac{t - \hat{z}}{0 - t}$ is nondecreasing for $t < 0$, each $\frac{f(x_i) - \hat{z}}{0 - f(x_i)} \le \frac{z_i - \hat{z}}{0 - z_i}$.

3.2 Proof of Corollary 4

Corollary 4 follows directly from Theorem 3. By Proposition 3, the algorithm will correctly report an unbounded objective if it ever encounters $\gamma_{z_i}(\tilde{x}_{i+1}) = 0$ (and thus the corollary holds). So we assume this never occurs, which implies the sequence of iterates $\{x_i\}$ is well-defined. Selecting $\alpha_i = \beta_i|z_i|$ gives $\alpha_i(\hat{z}/z_i) = \beta_i|\hat{z}|$, so the upper bound in Theorem 3 converges to zero:
\[ \lim_{k\to\infty}\frac{\operatorname{dist}(x_0, \hat{X})^2 + \frac{1}{R^2}\sum_{i=0}^{k}\alpha_i^2(\hat{z}/z_i)^2}{2\sum_{i=0}^{k}\alpha_i(\hat{z}/z_i)} = \lim_{k\to\infty}\frac{\operatorname{dist}(x_0, \hat{X})^2 + \frac{\hat{z}^2}{R^2}\sum_{i=0}^{k}\beta_i^2}{2|\hat{z}|\sum_{i=0}^{k}\beta_i} = 0. \]

Then we have the following convergence result:
\[ \lim_{k\to\infty}\min_{i\le k}\frac{f(x_i) - \hat{z}}{0 - f(x_i)} \le 0. \]
This implies $\lim_{k\to\infty}\min_{i\le k}f(x_i) \le \hat{z}$. Considering a sequence of $\hat{z}$ approaching $f^*$ gives the desired result.

3.3 Proof of Theorem 5

Since we assume $f^*$ is finite, we know all $\gamma_{z_i}(\tilde{x}_{i+1}) \ge z_i/f^* > 0$ (by Proposition 3). Thus the sequence of iterates $\{x_i\}$ is well-defined. Consider any $k \ge 0$. Taking (7) with $\hat{z} = f^*$ (so that $\hat{X} = X^*$) gives
\[ \sum_{i=0}^{k}2\alpha_i\left(\frac{f^*}{z_i}\right)^2\frac{z_i - f^*}{0 - f^*} \le \operatorname{dist}(x_0, X^*)^2 + \sum_{i=0}^{k}\alpha_i^2\left(\frac{f^*}{z_i}\right)^2\|\zeta_i\|^2. \tag{9} \]
Combining this with our choice of step size $\alpha_i = \frac{z_i - f^*}{0 - f^*}\cdot\frac{1}{\|\zeta_i\|^2}$ yields
\[ \sum_{i=0}^{k}\frac{2}{\|\zeta_i\|^2}\left(\frac{f^*}{z_i}\cdot\frac{z_i - f^*}{0 - f^*}\right)^2 \le \operatorname{dist}(x_0, X^*)^2 + \sum_{i=0}^{k}\frac{1}{\|\zeta_i\|^2}\left(\frac{f^*}{z_i}\cdot\frac{z_i - f^*}{0 - f^*}\right)^2 \tag{10} \]
\[ \implies \sum_{i=0}^{k}\frac{1}{\|\zeta_i\|^2}\left(\frac{f^*}{z_i}\cdot\frac{z_i - f^*}{0 - f^*}\right)^2 \le \operatorname{dist}(x_0, X^*)^2 \tag{11} \]
\[ \implies (k+1)\min_{i\le k}\frac{1}{\|\zeta_i\|^2}\left(\frac{f^*}{z_i}\cdot\frac{z_i - f^*}{0 - f^*}\right)^2 \le \operatorname{dist}(x_0, X^*)^2. \tag{12} \]
From Proposition 2, we know $\|\zeta_i\| \le 1/R$, and since $f^*/z_i \ge 1$,
\[ \min_{i\le k}\left(\frac{z_i - f^*}{0 - f^*}\right)^2 \le \frac{\operatorname{dist}(x_0, X^*)^2}{R^2(k+1)}. \tag{13} \]
From Lemma 6, we have $f(x_i) \le z_i$, which completes our proof by implying
\[ \min_{i\le k}\frac{f(x_i) - f^*}{0 - f^*} \le \frac{\operatorname{dist}(x_0, X^*)}{R\sqrt{k+1}}. \tag{14} \]

3.4 Proof of Theorem 6

Let $k = \left\lceil\frac{4}{3}\left(\frac{\operatorname{dist}(x_0, \hat{X})}{R\,\epsilon}\right)^2\right\rceil - 1$. We assume $\gamma_{z_i}(\tilde{x}_{i+1}) > 0$ for all $i \le k$ (otherwise the theorem immediately holds by Proposition 3). Then the first $k$ iterates of Algorithm 1 are well-defined. Combining (7) with our choice of step size $\alpha_i = \frac{\epsilon z_i}{2\hat{z}\|\zeta_i\|^2}$ yields
\[ \sum_{i=0}^{k}\frac{\epsilon}{\|\zeta_i\|^2}\left(\frac{\hat{z}}{z_i} - 1\right) \le \operatorname{dist}(x_0, \hat{X})^2 + \sum_{i=0}^{k}\frac{\epsilon^2}{4\|\zeta_i\|^2} \tag{15} \]

\[ \implies \sum_{i=0}^{k}\frac{\epsilon}{\|\zeta_i\|^2}\left(\frac{\hat{z}}{z_i} - 1 - \frac{\epsilon}{4}\right) \le \operatorname{dist}(x_0, \hat{X})^2 \tag{16} \]
\[ \implies \epsilon(k+1)\min_{i\le k}\frac{1}{\|\zeta_i\|^2}\left(\frac{\hat{z}}{z_i} - 1 - \frac{\epsilon}{4}\right) \le \operatorname{dist}(x_0, \hat{X})^2. \tag{17} \]
If any $i \le k$ has $z_i \le \hat{z}$, the theorem holds (as this would imply $f(x_i) - \hat{z} \le 0$). So we now assume $\hat{z}/z_i \ge 1$ for all $i \le k$. Then, noting that $\|\zeta_i\| \le 1/R$, we can simplify our inequality to
\[ \min_{i\le k}\left(\frac{\hat{z}}{z_i} - 1\right) \le \frac{\operatorname{dist}(x_0, \hat{X})^2}{\epsilon R^2(k+1)} + \frac{\epsilon}{4}. \tag{18} \]
Since $f(x_i) \le z_i$ from Lemma 6 (and thus $\frac{f(x_i) - \hat{z}}{0 - \hat{z}} \le \frac{z_i - \hat{z}}{0 - z_i} = \frac{\hat{z}}{z_i} - 1$), we have the following, completing our proof of Theorem 6:
\[ \min_{i\le k}\frac{f(x_i) - \hat{z}}{0 - \hat{z}} \le \frac{\operatorname{dist}(x_0, \hat{X})^2}{\epsilon R^2(k+1)} + \frac{\epsilon}{4} \le \frac{3\epsilon}{4} + \frac{\epsilon}{4} = \epsilon, \tag{19} \]
where the last step uses our choice of $k$.

Acknowledgments. The author wishes to express his deepest gratitude to Jim Renegar for numerous insightful discussions helping motivate this work, and for advising on both the presentation and positioning of this paper.

References

[1] James Renegar. Efficient subgradient methods for general convex optimization. SIAM Journal on Optimization, 26(4):2649–2676, 2016.

[2] Patrick L. Combettes and Christian L. Müller. Perspective functions: Proximal calculus and applications in high-dimensional statistics. Journal of Mathematical Analysis and Applications, 2016.

[3] A. Y. Aravkin, J. V. Burke, D. Drusvyatskiy, M. P. Friedlander, and K. MacPhee. Foundations of gauge and perspective duality. Preprint, arXiv:1702.08649, February 2017.

[4] Robert M. Freund and Haihao Lu. New computational guarantees for solving convex optimization problems with first-order methods, via a function growth condition measure. Preprint, arXiv:1511.02974, November 2016.

[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004. Page 89.