Technical report no. SOL Department of Management Science and Engineering, Stanford University, Stanford, California, September 15, 2013.


PROXIMAL NEWTON-TYPE METHODS FOR MINIMIZING COMPOSITE FUNCTIONS

JASON D. LEE, YUEKAI SUN, AND MICHAEL A. SAUNDERS

J. Lee and Y. Sun contributed equally to this work. J. Lee and Y. Sun: Institute for Computational and Mathematical Engineering, Stanford University, Stanford, California. M. Saunders: Department of Management Science and Engineering, Stanford University, Stanford, California. A preliminary version of this work appeared in [14].

Abstract. We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many popular methods tailored to problems arising in bioinformatics, signal processing, and statistical learning are special cases of proximal Newton-type methods, and our analysis yields new convergence results for some of these methods.

Key words. convex optimization, nonsmooth optimization, proximal mapping

AMS subject classifications. 65K05, 90C25, 90C53

1. Introduction. Many problems of relevance in bioinformatics, signal processing, and statistical learning can be formulated as minimizing a composite function:

    minimize_{x ∈ R^n}  f(x) := g(x) + h(x),    (1.1)

where g is a convex, continuously differentiable loss function, and h is a convex but not necessarily differentiable penalty function or regularizer. Such problems include the lasso [24], the graphical lasso [10], and trace-norm matrix completion [5].

We describe a family of Newton-type methods for minimizing composite functions that achieve superlinear rates of convergence subject to standard assumptions. The methods can be interpreted as generalizations of the classic proximal gradient method that account for the curvature of the function when selecting a search direction. Many popular methods for minimizing composite functions are special cases of these proximal Newton-type methods, and our analysis yields new convergence results for some of these methods.

1.1. Notation. The methods we consider are line search methods, which means that they produce a sequence of points {x_k} according to

    x_{k+1} = x_k + t_k Δx_k,

where t_k is a step length and Δx_k is a search direction. When we focus on one iteration of an algorithm, we drop the subscripts (e.g., x_+ = x + t Δx). All the methods we consider compute search directions by minimizing local quadratic models of the composite function f. We use an accent ˆ to denote these local quadratic models (e.g., f̂_k is a local quadratic model of f at the k-th step).
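As a concrete instance of (1.1) (an illustration added for this transcription, not part of the original report), the lasso takes g(x) = (1/2)‖Ax − b‖² and h(x) = λ‖x‖_1; the data A, b and the parameter lam below are arbitrary placeholders.

```python
import numpy as np

# Hypothetical data for a lasso instance of (1.1); A, b, and lam are placeholders.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1

def g(x):
    """Smooth part: least-squares loss g(x) = 0.5*||Ax - b||^2."""
    r = A @ x - b
    return 0.5 * r @ r

def grad_g(x):
    """Gradient of the smooth part: A'(Ax - b)."""
    return A.T @ (A @ x - b)

def h(x):
    """Nonsmooth part: l1 regularizer lam*||x||_1."""
    return lam * np.abs(x).sum()

def f(x):
    """Composite objective f = g + h as in (1.1)."""
    return g(x) + h(x)
```

The rest of the paper uses exactly this splitting: the smooth part supplies gradients (and curvature), while the nonsmooth part is accessed only through its proximal mapping.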

1.2. First-order methods. The most popular methods for minimizing composite functions are first-order methods that use proximal mappings to handle the nonsmooth part h. SpaRSA [27] is a popular spectral projected gradient method that uses a spectral step length together with a nonmonotone line search to improve convergence. TRIP [13] also uses a spectral step length but selects search directions using a trust-region strategy.

We can accelerate the convergence of first-order methods using ideas due to Nesterov [16]. This yields accelerated first-order methods, which achieve ε-suboptimality within O(1/√ε) iterations [25]. The most popular method in this family is the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [1]. These methods have been implemented in the package TFOCS [3] and used to solve problems that commonly arise in statistics, signal processing, and statistical learning.

1.3. Newton-type methods. There are two classes of methods that generalize Newton-type methods for minimizing smooth functions to handle composite functions (1.1). Nonsmooth Newton-type methods [28] successively minimize a local quadratic model of the composite function f:

    f̂_k(y) = f(x_k) + sup_{z ∈ ∂f(x_k)} z^T (y − x_k) + (1/2)(y − x_k)^T H_k (y − x_k),

where H_k accounts for the curvature of f. (Although computing this Δx_k is generally not practical, we can exploit the special structure of f in many statistical learning problems.) Our proximal Newton-type methods approximate only the smooth part g with a local quadratic model:

    f̂_k(y) = g(x_k) + ∇g(x_k)^T (y − x_k) + (1/2)(y − x_k)^T H_k (y − x_k) + h(y),

where H_k is an approximation to ∇²g(x_k). This idea can be traced back to the generalized proximal point method of Fukushima and Mine [11].

Many popular methods for minimizing composite functions are special cases of proximal Newton-type methods. Methods tailored to a specific problem include glmnet [9], LIBLINEAR [29], QUIC [12], and the Newton-LASSO method [17]. Generic methods include projected Newton-type methods [23, 22], proximal quasi-Newton methods [21, 2], and the method of Tseng and Yun [26, 15].

There is a rich literature on solving generalized equations, monotone inclusions, and variational inequalities. Minimizing composite functions is a special case of solving these problems, and proximal Newton-type methods are special cases of Newton-type methods for these problems [18]. We refer to [19] for a unified treatment of descent methods (including proximal Newton-type methods) for such problems.

2. Proximal Newton-type methods. We seek to minimize composite functions f(x) := g(x) + h(x) as in (1.1). We assume g and h are closed, convex functions. g is continuously differentiable, and its gradient ∇g is Lipschitz continuous. h is not necessarily everywhere differentiable, but its proximal mapping (2.1) can be evaluated efficiently. We refer to g as the smooth part and h as the nonsmooth part. We assume the optimal value f* is attained at some optimal solution x*, not necessarily unique.

2.1. The proximal gradient method. The proximal mapping of a convex function h at x is

    prox_h(x) := arg min_{y ∈ R^n} h(y) + (1/2)‖y − x‖².    (2.1)
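To make (2.1) concrete, here is a small sketch (ours, not from the report) that evaluates the proximal mapping of the indicator function of the nonnegative orthant; in that case the h(y) term only restricts the feasible set, so the minimization reduces to a Euclidean projection.

```python
import numpy as np

def prox_indicator_nonneg(x):
    """Proximal mapping (2.1) of the indicator of {y : y >= 0}.
    The objective (1/2)*||y - x||^2 over y >= 0 separates by coordinate,
    and each scalar problem is minimized at max(x_i, 0)."""
    return np.maximum(x, 0.0)

print(prox_indicator_nonneg(np.array([-1.5, 0.3, 2.0])))  # -> [0.  0.3 2. ]
```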

Proximal mappings can be interpreted as generalized projections because if h is the indicator function of a convex set, then prox_h(x) is the projection of x onto the set. If h is the ℓ_1 norm and t is a step length, then prox_{th}(x) is the soft-threshold operation:

    prox_{tℓ_1}(x) = sign(x) ⊙ max{|x| − t, 0},

where sign and max are applied entry-wise and ⊙ denotes the entry-wise product.

The proximal gradient method uses the proximal mapping of the nonsmooth part to minimize composite functions:

    x_{k+1} = x_k − t_k G_{t_k f}(x_k),
    G_{t_k f}(x_k) := (1/t_k)( x_k − prox_{t_k h}(x_k − t_k ∇g(x_k)) ),

where t_k denotes the k-th step length and G_{t_k f}(x_k) is a composite gradient step. Most first-order methods, including SpaRSA and accelerated first-order methods, are variants of this simple method. We note three properties of the composite gradient step:

1. G_{t_k f}(x_k) steps to the minimizer of h plus a simple quadratic model of g near x_k:

    x_{k+1} = prox_{t_k h}(x_k − t_k ∇g(x_k))    (2.2)
            = arg min_y  t_k h(y) + (1/2)‖y − x_k + t_k ∇g(x_k)‖²    (2.3)
            = arg min_y  ∇g(x_k)^T (y − x_k) + (1/(2 t_k))‖y − x_k‖² + h(y).    (2.4)

2. G_{t_k f}(x_k) is neither a gradient nor a subgradient of f at any point; rather it is the sum of an explicit gradient and an implicit subgradient:

    G_{t_k f}(x_k) ∈ ∇g(x_k) + ∂h(x_{k+1}).

3. G_{t_k f}(x) is zero if and only if x minimizes f.

The third property generalizes the zero-gradient optimality condition for smooth functions to composite functions. We shall use the length of G_f(x) to measure the optimality of a point x.
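The following sketch (illustrative, reusing the lasso set-up from section 1) implements the soft-thresholding operator and the composite gradient step G_{tf}(x), and runs the proximal gradient method; the step length 1/L_1 and the iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
grad_g = lambda x: A.T @ (A @ x - b)

def prox_l1(z, t):
    """Soft-thresholding: the proximal mapping of t*lam*||.||_1, entry-wise."""
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def G(x, t):
    """Composite gradient step G_{tf}(x); it is zero iff x minimizes f."""
    return (x - prox_l1(x - t * grad_g(x), t)) / t

t = 1.0 / np.linalg.norm(A, 2) ** 2        # step length 1/L_1 for this g
x = np.zeros(A.shape[1])
for _ in range(200):                        # proximal gradient iterations
    x = x - t * G(x, t)
print(np.linalg.norm(G(x, t)))              # optimality measure ||G_{tf}(x)||
```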

Lemma 2.1. If ∇g is Lipschitz continuous with constant L_1, then G_f(x) satisfies

    ‖G_f(x)‖ ≤ (L_1 + 1)‖x − x*‖.

Proof. The composite gradient steps at x and at the optimal solution x* satisfy

    G_f(x) ∈ ∇g(x) + ∂h(x − G_f(x)),
    G_f(x*) ∈ ∇g(x*) + ∂h(x*).

We subtract these two expressions and rearrange to obtain

    ∂h(x − G_f(x)) − ∂h(x*) ∋ G_f(x) − (∇g(x) − ∇g(x*)).

∂h is monotone, hence

    0 ≤ (x − G_f(x) − x*)^T (G_f(x) − ∇g(x) + ∇g(x*))
      = −‖G_f(x)‖² + (x − x*)^T G_f(x) + G_f(x)^T (∇g(x) − ∇g(x*)) − (x − x*)^T (∇g(x) − ∇g(x*)).

The last term is nonpositive because ∇g is monotone, so we may drop it to obtain

    0 ≤ −‖G_f(x)‖² + (x − x*)^T G_f(x) + G_f(x)^T (∇g(x) − ∇g(x*))
      ≤ −‖G_f(x)‖² + ‖G_f(x)‖(‖x − x*‖ + ‖∇g(x) − ∇g(x*)‖).

We rearrange to obtain

    ‖G_f(x)‖ ≤ ‖x − x*‖ + ‖∇g(x) − ∇g(x*)‖.

∇g is Lipschitz continuous, hence ‖G_f(x)‖ ≤ (L_1 + 1)‖x − x*‖.

2.2. Proximal Newton-type methods. Proximal Newton-type methods use a local quadratic model (in lieu of the simple quadratic model in the proximal gradient method (2.4)) to account for the curvature of g. A local quadratic model of g at x_k is

    ĝ_k(y) = g(x_k) + ∇g(x_k)^T (y − x_k) + (1/2)(y − x_k)^T H_k (y − x_k),

where H_k denotes an approximation to ∇²g(x_k). A proximal Newton-type search direction Δx_k solves the subproblem

    Δx_k = arg min_d f̂_k(x_k + d) := ĝ_k(x_k + d) + h(x_k + d).    (2.5)

There are many strategies for choosing H_k. If we choose H_k to be ∇²g(x_k), then we obtain the proximal Newton method. If we build an approximation to ∇²g(x_k) using changes measured in ∇g according to a quasi-Newton strategy, we obtain a proximal quasi-Newton method. If the problem is large, we can use limited-memory quasi-Newton updates to reduce memory usage. Generally speaking, most strategies for choosing Hessian approximations for Newton-type methods (for minimizing smooth functions) can be adapted to choosing H_k in the context of proximal Newton-type methods.
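Since the subproblem (2.5) rarely has a closed-form solution, an inner iterative solver is needed. The sketch below is an illustration (the paper itself recommends, e.g., coordinate descent with an active-set strategy when h is the ℓ_1 norm): it simply runs the proximal gradient method on the model ĝ_k + h, assuming h = λ‖·‖_1; all names and iteration counts are placeholders.

```python
import numpy as np

def soft(z, t):
    """Entry-wise soft-thresholding, the proximal mapping of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_newton_direction(x, grad, H, lam, iters=200):
    """Approximately solve (2.5) for h = lam*||.||_1 by applying the proximal
    gradient method to the model: minimize over d
        grad'd + 0.5*d'Hd + lam*||x + d||_1."""
    s = 1.0 / np.linalg.norm(H, 2)          # inner step length 1/lambda_max(H)
    d = np.zeros_like(x)
    for _ in range(iters):
        model_grad = grad + H @ d           # gradient of g_hat_k at x + d
        d = soft(x + d - s * model_grad, s * lam) - x
    return d

# Illustrative use on a small random model.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
H = B @ B.T + np.eye(5)                     # a positive definite H_k
x, grad = rng.standard_normal(5), rng.standard_normal(5)
dx = prox_newton_direction(x, grad, H, lam=0.5)
```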

We can also express a proximal Newton-type search direction using scaled proximal mappings. This lets us interpret a proximal Newton-type search direction as a composite Newton step and reveals a connection with the composite gradient step.

Definition 2.2. Let h be a convex function and H a positive definite matrix. Then the scaled proximal mapping of h at x is

    prox_h^H(x) := arg min_{y ∈ R^n} h(y) + (1/2)‖y − x‖²_H.    (2.6)

Scaled proximal mappings share many properties with (unscaled) proximal mappings:

1. prox_h^H(x) exists and is unique for x ∈ dom h because the proximity function is strongly convex if H is positive definite.
2. Let ∂h(x) be the subdifferential of h at x. Then prox_h^H(x) satisfies

    H( x − prox_h^H(x) ) ∈ ∂h( prox_h^H(x) ).    (2.7)

3. prox_h^H(x) is firmly nonexpansive in the H-norm. That is, if u = prox_h^H(x) and v = prox_h^H(y), then

    (u − v)^T H (x − y) ≥ ‖u − v‖²_H,

and the Cauchy-Schwarz inequality implies ‖u − v‖_H ≤ ‖x − y‖_H.

We can express a proximal Newton-type search direction as a composite Newton step using scaled proximal mappings:

    Δx = prox_h^H( x − H^{-1} ∇g(x) ) − x.    (2.8)

We use (2.7) to deduce that a proximal Newton search direction satisfies

    H( −H^{-1} ∇g(x) − Δx ) ∈ ∂h(x + Δx).

We simplify to obtain

    H Δx ∈ −∇g(x) − ∂h(x + Δx).    (2.9)

Thus a proximal Newton-type search direction, like the composite gradient step, combines an explicit gradient with an implicit subgradient. Note that this expression yields the Newton system in the case of smooth functions (i.e., h is zero).
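For one special case the search direction is available in closed form: if H is diagonal and h = λ‖·‖_1, the scaled proximal mapping (2.6) separates across coordinates and the composite Newton step (2.8) is an entry-wise soft-threshold with per-coordinate thresholds. This is an illustrative special case we sketch here, not one singled out in the text.

```python
import numpy as np

def scaled_prox_l1_diag(z, d, lam):
    """prox_h^H(z) for h = lam*||.||_1 and H = diag(d), d > 0: coordinate i
    solves min_y lam*|y| + (d_i/2)*(y - z_i)^2, i.e. soft-thresholding
    with threshold lam/d_i."""
    return np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)

def prox_newton_step_diag(x, grad, d, lam):
    """Composite Newton step (2.8) with H = diag(d):
    Delta x = prox_h^H(x - H^{-1} grad) - x."""
    return scaled_prox_l1_diag(x - grad / d, d, lam) - x

x = np.array([1.0, -2.0, 0.5])
grad = np.array([0.3, -0.1, 0.7])
d = np.array([2.0, 1.0, 4.0])               # diagonal of a positive definite H
print(prox_newton_step_diag(x, grad, d, lam=0.2))
```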

Proposition 2.3 (Search direction properties). If H is positive definite, then Δx in (2.5) satisfies

    f(x_+) ≤ f(x) + t( ∇g(x)^T Δx + h(x + Δx) − h(x) ) + O(t²),    (2.10)
    ∇g(x)^T Δx + h(x + Δx) − h(x) ≤ −Δx^T H Δx.    (2.11)

Proof. For t ∈ (0, 1],

    f(x_+) − f(x) = g(x_+) − g(x) + h(x_+) − h(x)
                  ≤ g(x_+) − g(x) + t h(x + Δx) + (1 − t) h(x) − h(x)
                  = g(x_+) − g(x) + t( h(x + Δx) − h(x) )
                  = ∇g(x)^T (t Δx) + t( h(x + Δx) − h(x) ) + O(t²),

which proves (2.10). Since Δx steps to the minimizer of f̂ (2.5), t Δx satisfies

    ∇g(x)^T Δx + (1/2) Δx^T H Δx + h(x + Δx)
        ≤ ∇g(x)^T (t Δx) + (1/2) t² Δx^T H Δx + h(x + t Δx)
        ≤ t ∇g(x)^T Δx + (1/2) t² Δx^T H Δx + t h(x + Δx) + (1 − t) h(x).

We rearrange and then simplify:

    (1 − t) ∇g(x)^T Δx + (1/2)(1 − t²) Δx^T H Δx + (1 − t)( h(x + Δx) − h(x) ) ≤ 0,
    ∇g(x)^T Δx + (1/2)(1 + t) Δx^T H Δx + h(x + Δx) − h(x) ≤ 0,
    ∇g(x)^T Δx + h(x + Δx) − h(x) ≤ −(1/2)(1 + t) Δx^T H Δx.

Finally, we let t → 1 and rearrange to obtain (2.11).

Proposition 2.3 implies the search direction is a descent direction for f, because we can substitute (2.11) into (2.10) to obtain

    f(x_+) ≤ f(x) − t Δx^T H Δx + O(t²).    (2.12)

Proposition 2.4. Suppose H is positive definite. Then x is an optimal solution if and only if the search direction Δx (2.5) at x is zero.

Proof. If Δx at x is nonzero, it is a descent direction for f at x; hence x cannot be a minimizer of f. If Δx = 0, then x is the minimizer of f̂, so

    ∇g(x)^T (t d) + (1/2) t² d^T H d + h(x + t d) − h(x) ≥ 0

for all t > 0 and all d. We rearrange to obtain

    h(x + t d) − h(x) ≥ −t ∇g(x)^T d − (1/2) t² d^T H d.    (2.13)

Let Df(x; d) be the directional derivative of f at x in the direction d:

    Df(x; d) = lim_{t→0} ( f(x + t d) − f(x) ) / t
             = lim_{t→0} ( g(x + t d) − g(x) + h(x + t d) − h(x) ) / t
             = lim_{t→0} ( t ∇g(x)^T d + O(t²) + h(x + t d) − h(x) ) / t.    (2.14)

We substitute (2.13) into (2.14) to obtain

    Df(x; d) ≥ lim_{t→0} ( t ∇g(x)^T d + O(t²) − t ∇g(x)^T d − (1/2) t² d^T H d ) / t
             = lim_{t→0} ( −(1/2) t² d^T H d + O(t²) ) / t = 0.

Since f is convex, x is an optimal solution if and only if Δx = 0.

In a few special cases we can derive a closed-form expression for the proximal Newton search direction, but we must usually resort to an iterative method. The user should choose an iterative method that exploits the properties of h. E.g., if h is the ℓ_1 norm, then (block) coordinate descent methods combined with an active-set strategy are known to be very efficient for these problems [9].

We use a line search procedure to select a step length t that satisfies a sufficient descent condition:

    f(x_+) ≤ f(x) + α t Δ,    (2.15)
    Δ := ∇g(x)^T Δx + h(x + Δx) − h(x),    (2.16)

where α ∈ (0, 0.5) can be interpreted as the fraction of the decrease in f predicted by linear extrapolation that we will accept. A simple example of a line search procedure is backtracking line search [4].
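A minimal backtracking sketch of the test (2.15)-(2.16) follows; the functions f, grad_g, h and the direction dx are assumed to come from the surrounding method, and the shrinkage factor 0.5 is an arbitrary choice. Lemma 2.5 below guarantees the loop terminates when H ⪰ mI and ∇g is Lipschitz continuous.

```python
def backtracking_line_search(f, grad_g, h, x, dx, alpha=0.25, beta=0.5):
    """Return a step length t satisfying the sufficient descent condition
    (2.15), with the predicted decrease Delta defined as in (2.16)."""
    Delta = grad_g(x) @ dx + h(x + dx) - h(x)
    t, fx = 1.0, f(x)
    while f(x + t * dx) > fx + alpha * t * Delta:
        t *= beta                            # shrink until (2.15) holds
    return t
```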

Lemma 2.5. Suppose H ⪰ mI for some m > 0 and ∇g is Lipschitz continuous with constant L_1. Then, with κ = L_1/m, any step length

    t ≤ min{ 1, (2/κ)(1 − α) }    (2.17)

satisfies the sufficient descent condition (2.15).

Proof. We can bound the decrease at each iteration by

    f(x_+) − f(x) = g(x_+) − g(x) + h(x_+) − h(x)
        ≤ ∫_0^1 ∇g(x + s(t Δx))^T (t Δx) ds + t h(x + Δx) + (1 − t) h(x) − h(x)
        = ∇g(x)^T (t Δx) + t( h(x + Δx) − h(x) ) + ∫_0^1 ( ∇g(x + s(t Δx)) − ∇g(x) )^T (t Δx) ds
        ≤ t( ∇g(x)^T Δx + h(x + Δx) − h(x) + ∫_0^1 ‖∇g(x + s(t Δx)) − ∇g(x)‖ ‖Δx‖ ds ).

Since ∇g is Lipschitz continuous with constant L_1,

    f(x_+) − f(x) ≤ t( ∇g(x)^T Δx + h(x + Δx) − h(x) + (L_1 t / 2)‖Δx‖² ) = t( Δ + (L_1 t / 2)‖Δx‖² ),    (2.18)

where we use (2.16). If we choose t ≤ (2/κ)(1 − α), κ = L_1/m, then

    (L_1 t / 2)‖Δx‖² ≤ m(1 − α)‖Δx‖² ≤ (1 − α) Δx^T H Δx ≤ −(1 − α) Δ,    (2.19)

where we again use (2.11). We substitute (2.19) into (2.18) to obtain

    f(x_+) − f(x) ≤ t( Δ − (1 − α) Δ ) = t(α Δ).

2.3. Inexact proximal Newton-type methods. Inexact proximal Newton-type methods solve subproblem (2.5) approximately to obtain inexact search directions. These methods can be more efficient than their exact counterparts because they require less computational expense per iteration. In fact, many practical implementations of proximal Newton-type methods, such as glmnet, LIBLINEAR, and QUIC, use inexact search directions.

In practice, how exactly (or inexactly) we solve the subproblem is critical to the efficiency and reliability of the method. The practical implementations of proximal Newton-type methods we mentioned use a variety of heuristics to decide how accurately to solve the subproblem.

Algorithm 1  A generic proximal Newton-type method
Require: starting point x_0 ∈ dom f
1: repeat
2:   Choose H_k, a positive definite approximation to the Hessian.
3:   Solve the subproblem for a search direction:
         Δx_k ← arg min_d ∇g(x_k)^T d + (1/2) d^T H_k d + h(x_k + d).
4:   Select t_k with a backtracking line search.
5:   Update: x_{k+1} ← x_k + t_k Δx_k.
6: until stopping conditions are satisfied.

Although these methods perform admirably in practice, there are few results on how inexact solutions to the subproblem affect their convergence behavior. First we propose an adaptive stopping condition for the subproblem. Then in section 3 we analyze the convergence behavior of inexact Newton-type methods. Finally, in section 4 we conduct computational experiments to compare the performance of our stopping condition against commonly used heuristics.

Our stopping condition is motivated by the adaptive stopping condition used by inexact Newton-type methods for minimizing smooth functions:

    ‖∇ĝ_k(x_k + Δx_k)‖ ≤ η_k ‖∇g(x_k)‖,    (2.20)

where η_k is called a forcing term because it forces the left-hand side to be small. We generalize (2.20) to composite functions by substituting composite gradients into (2.20) and scaling the norm:

    ‖∇ĝ_k(x_k + Δx_k) + ∂h(x_k + Δx_k)‖_{H_k^{-1}} ≤ η_k ‖G_f(x_k)‖_{H_k^{-1}},    (2.21)

where the left-hand side denotes the smallest scaled norm of an element of ∇ĝ_k(x_k + Δx_k) + ∂h(x_k + Δx_k). We set η_k based on how well ĝ_{k−1} approximates g near x_k:

    η_k = min{ 0.1, ‖∇ĝ_{k−1}(x_k) − ∇g(x_k)‖ / ‖∇g(x_{k−1})‖ }.    (2.22)

This choice, due to Eisenstat and Walker [8], yields desirable convergence results and performs admirably in practice.

Intuitively, we should solve the subproblem exactly if (i) x_k is close to the optimal solution, and (ii) f̂_k is a good model of f near x_k. If (i), then we seek to preserve the fast local convergence behavior of proximal Newton-type methods; if (ii), then minimizing f̂_k is a good surrogate for minimizing f. In these cases, (2.21) and (2.22) ensure the subproblem is solved accurately.

We can derive an expression like (2.9) for an inexact search direction in terms of an explicit gradient, an implicit subgradient, and a residual term r_k. This reveals connections to the inexact Newton search direction in the case of smooth problems. Condition (2.21) is equivalent to

    0 ∈ ∇ĝ_k(x_k + Δx_k) + ∂h(x_k + Δx_k) + r_k = ∇g(x_k) + H_k Δx_k + ∂h(x_k + Δx_k) + r_k

for some r_k such that ‖r_k‖_{H_k^{-1}} ≤ η_k ‖G_f(x_k)‖_{H_k^{-1}}. Hence an inexact search direction satisfies

    H_k Δx_k ∈ −∇g(x_k) − ∂h(x_k + Δx_k) + r_k.    (2.23)
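Putting Algorithm 1 together with the adaptive rules (2.21)-(2.22), the following is an illustrative loop (it is not the authors' PNOPT code); to keep it short it measures the stopping quantities in the Euclidean norm rather than the scaled H_k^{-1} norm, and it delegates the subproblem to a solver such as the prox_newton_direction sketch above.

```python
import numpy as np

def proximal_newton(f, grad_g, hess_g, h, solve_subproblem, x0,
                    alpha=0.25, tol=1e-6, max_iter=100):
    """Generic inexact proximal Newton-type method in the spirit of
    Algorithm 1 (illustrative sketch).  solve_subproblem(x, grad, H, eta)
    should return an approximate minimizer of (2.5) whose accuracy
    improves as the forcing term eta shrinks (cf. (2.21))."""
    x = x0
    grad_prev = model_grad_prev = None
    for _ in range(max_iter):
        grad = grad_g(x)
        H = hess_g(x)                               # or a (L-)BFGS approximation
        # Forcing term (2.22), simplified to Euclidean norms.
        if grad_prev is None or np.linalg.norm(grad_prev) == 0:
            eta = 0.1
        else:
            eta = min(0.1, np.linalg.norm(model_grad_prev - grad)
                           / np.linalg.norm(grad_prev))
        dx = solve_subproblem(x, grad, H, eta)      # step 3 of Algorithm 1
        Delta = grad @ dx + h(x + dx) - h(x)
        t = 1.0                                     # backtracking, step 4
        while f(x + t * dx) > f(x) + alpha * t * Delta:
            t *= 0.5
        grad_prev = grad
        model_grad_prev = grad + H @ (t * dx)       # grad of g_hat_k at x_{k+1}
        x = x + t * dx                              # step 5
        if np.linalg.norm(dx) <= tol:
            break
    return x
```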

3. Convergence results. Our first result guarantees that proximal Newton-type methods converge globally to some optimal solution x*. We assume {H_k} are sufficiently positive definite, i.e., H_k ⪰ mI for some m > 0. This assumption is required to guarantee the methods are executable, i.e., there exist step lengths that satisfy the sufficient descent condition (cf. Lemma 2.5).

Theorem 3.1. If H_k ⪰ mI for some m > 0, then x_k converges to an optimal solution starting at any x_0 ∈ dom f.

Proof. f(x_k) is decreasing because Δx_k is always a descent direction (2.12) and there exist step lengths satisfying the sufficient descent condition (2.15) (cf. Lemma 2.5):

    f(x_k) − f(x_{k+1}) ≥ −α t_k Δ_k ≥ 0.

f(x_k) must converge to some limit (we assumed f is closed and the optimal value is attained); hence t_k Δ_k must decay to zero. t_k is bounded away from zero because sufficiently small step lengths attain sufficient descent; hence Δ_k must decay to zero. We use (2.11) to deduce that Δx_k also converges to zero:

    ‖Δx_k‖² ≤ (1/m) Δx_k^T H_k Δx_k ≤ −(1/m) Δ_k.

Δx_k is zero if and only if x_k is an optimal solution (cf. Proposition 2.4); hence x_k converges to some optimal solution x*.

3.1. Convergence of the proximal Newton method. The proximal Newton method uses the exact Hessian of the smooth part g in the second-order model of f, i.e., H_k = ∇²g(x_k). This method converges q-quadratically:

    ‖x_{k+1} − x*‖ ≤ O(‖x_k − x*‖²),

subject to standard assumptions on the smooth part: we require g to be locally strongly convex and ∇²g to be locally Lipschitz continuous, i.e., g is strongly convex and ∇²g is Lipschitz continuous in a ball around x*. These are standard assumptions for proving that Newton's method for minimizing smooth functions converges q-quadratically.

First, we prove an auxiliary result: step lengths of unity satisfy the sufficient descent condition after sufficiently many iterations.

Lemma 3.2. Suppose (i) g is locally strongly convex with constant m and (ii) ∇²g is locally Lipschitz continuous with constant L_2. If we choose H_k = ∇²g(x_k), then the unit step length satisfies the sufficient descent condition (2.15) for k sufficiently large.

Proof. Since ∇²g is locally Lipschitz continuous with constant L_2,

    g(x + Δx) ≤ g(x) + ∇g(x)^T Δx + (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³.

We add h(x + Δx) to both sides to obtain

    f(x + Δx) ≤ g(x) + ∇g(x)^T Δx + (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³ + h(x + Δx).

We then add and subtract h(x) from the right-hand side to obtain

    f(x + Δx) ≤ g(x) + h(x) + ∇g(x)^T Δx + h(x + Δx) − h(x) + (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³
              ≤ f(x) − (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³
              ≤ f(x) − ( 1/2 − (L_2/(6m))‖Δx‖ ) Δx^T ∇²g(x) Δx,

where we use (2.16), (2.11), and m‖Δx‖² ≤ Δx^T ∇²g(x) Δx. We rearrange (again using (2.11)) to obtain

    f(x + Δx) − f(x) ≤ ( 1/2 − (L_2/(6m))‖Δx‖ ) Δ.

We can show Δx_k decays to zero via the same argument that we used to prove Theorem 3.1. Hence, if k is sufficiently large, (L_2/(6m))‖Δx_k‖ ≤ 1/2 − α and

    f(x_k + Δx_k) − f(x_k) ≤ ( 1/2 − (L_2/(6m))‖Δx_k‖ ) Δ_k ≤ α Δ_k,

i.e., the unit step length satisfies the sufficient descent condition.

Theorem 3.3. Suppose (i) g is locally strongly convex with constant m, and (ii) ∇²g is locally Lipschitz continuous with constant L_2. Then the proximal Newton method converges q-quadratically to x*.

Proof. The assumptions of Lemma 3.2 are satisfied; hence unit step lengths satisfy the sufficient descent condition after sufficiently many steps:

    x_{k+1} = x_k + Δx_k = prox_h^{∇²g(x_k)}( x_k − ∇²g(x_k)^{-1} ∇g(x_k) ).

prox_h^{∇²g(x_k)} is firmly nonexpansive in the ∇²g(x_k)-norm, hence

    ‖x_{k+1} − x*‖_{∇²g(x_k)}
        = ‖prox_h^{∇²g(x_k)}( x_k − ∇²g(x_k)^{-1} ∇g(x_k) ) − prox_h^{∇²g(x_k)}( x* − ∇²g(x_k)^{-1} ∇g(x*) )‖_{∇²g(x_k)}
        ≤ ‖x_k − x* + ∇²g(x_k)^{-1}( ∇g(x*) − ∇g(x_k) )‖_{∇²g(x_k)}
        ≤ (1/√m) ‖∇²g(x_k)(x_k − x*) − ∇g(x_k) + ∇g(x*)‖.

∇²g is locally Lipschitz continuous with constant L_2; hence

    ‖∇²g(x_k)(x_k − x*) − ∇g(x_k) + ∇g(x*)‖ ≤ (L_2/2)‖x_k − x*‖².

We deduce that x_k converges to x* quadratically:

    ‖x_{k+1} − x*‖ ≤ (1/√m)‖x_{k+1} − x*‖_{∇²g(x_k)} ≤ (L_2/(2m))‖x_k − x*‖².

3.2. Convergence of proximal quasi-Newton methods. If the sequence {H_k} satisfies the Dennis-Moré criterion [7], namely

    ‖( H_k − ∇²g(x*) )(x_{k+1} − x_k)‖ / ‖x_{k+1} − x_k‖ → 0,    (3.1)

then we can prove that a proximal quasi-Newton method converges q-superlinearly:

    ‖x_{k+1} − x*‖ ≤ o(‖x_k − x*‖).

We also require g to be locally strongly convex and ∇²g to be locally Lipschitz continuous. These are the same assumptions required to prove that quasi-Newton methods for minimizing smooth functions converge superlinearly.
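For context, the proximal BFGS method used in the experiments of section 4 builds H_k with the standard BFGS update; a generic sketch of that update follows (the curvature safeguard is a common implementation choice, not a detail specified in the text).

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the Hessian approximation H_k, where
    s = x_{k+1} - x_k and y = grad_g(x_{k+1}) - grad_g(x_k).  The update is
    skipped when the curvature condition s'y > 0 (nearly) fails."""
    sy = s @ y
    if sy <= 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):
        return H
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / sy
```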

First, we prove two auxiliary results: (i) step lengths of unity satisfy the sufficient descent condition after sufficiently many iterations, and (ii) the proximal quasi-Newton step is close to the proximal Newton step.

Lemma 3.4. Suppose g is twice continuously differentiable and ∇²g is locally Lipschitz continuous with constant L_2. If {H_k} satisfy the Dennis-Moré criterion and their eigenvalues are bounded, then the unit step length satisfies the sufficient descent condition (2.15) after sufficiently many iterations.

Proof. The proof is very similar to the proof of Lemma 3.2, and we defer the details to Appendix A.

The proof of the next result mimics the analysis of Tseng and Yun [26].

Proposition 3.5. Suppose H and Ĥ are positive definite matrices with bounded eigenvalues: mI ⪯ H ⪯ MI and m̂I ⪯ Ĥ ⪯ M̂I. Let Δx and Δx̂ be the search directions generated using H and Ĥ respectively:

    Δx = prox_h^H( x − H^{-1} ∇g(x) ) − x,
    Δx̂ = prox_h^Ĥ( x − Ĥ^{-1} ∇g(x) ) − x.

Then there is a constant θ (depending on the eigenvalue bounds) such that these two search directions satisfy

    ‖Δx − Δx̂‖ ≤ √((1 + θ)/m) ‖(Ĥ − H) Δx̂‖^{1/2} ‖Δx̂‖^{1/2}.

Proof. By (2.5) and Fermat's rule, Δx and Δx̂ are also the solutions to

    Δx = arg min_d ∇g(x)^T d + Δx^T H d + h(x + d),
    Δx̂ = arg min_d ∇g(x)^T d + Δx̂^T Ĥ d + h(x + d).

Hence Δx and Δx̂ satisfy

    ∇g(x)^T Δx + Δx^T H Δx + h(x + Δx) ≤ ∇g(x)^T Δx̂ + Δx^T H Δx̂ + h(x + Δx̂)

and

    ∇g(x)^T Δx̂ + Δx̂^T Ĥ Δx̂ + h(x + Δx̂) ≤ ∇g(x)^T Δx + Δx̂^T Ĥ Δx + h(x + Δx).

We sum these two inequalities and rearrange to obtain

    Δx^T H Δx − Δx^T (H + Ĥ) Δx̂ + Δx̂^T Ĥ Δx̂ ≤ 0.

We then complete the square on the left side and rearrange to obtain

    Δx^T H Δx − 2 Δx^T H Δx̂ + Δx̂^T H Δx̂ ≤ Δx^T (Ĥ − H) Δx̂ + Δx̂^T (H − Ĥ) Δx̂.

The left side is ‖Δx − Δx̂‖²_H and the eigenvalues of H are bounded below by m. Thus

    ‖Δx − Δx̂‖ ≤ (1/√m)( Δx^T (Ĥ − H) Δx̂ + Δx̂^T (H − Ĥ) Δx̂ )^{1/2}
              ≤ (1/√m) ‖(Ĥ − H) Δx̂‖^{1/2} ( ‖Δx‖ + ‖Δx̂‖ )^{1/2}.    (3.2)

We use a result due to Tseng and Yun (cf. Lemma 3 in [26]) to bound the term ‖Δx‖ + ‖Δx̂‖. Let P denote Ĥ^{-1/2} H Ĥ^{-1/2}. Then Δx and Δx̂ satisfy

    ‖Δx‖ ≤ (M̂/(2m))( 1 + λ_max(P) + √(1 − 2λ_min(P) + λ_max(P)²) ) ‖Δx̂‖.

We denote the constant on the right-hand side by θ and conclude that

    ‖Δx‖ + ‖Δx̂‖ ≤ (1 + θ)‖Δx̂‖.    (3.3)

We substitute (3.3) into (3.2) to obtain

    ‖Δx − Δx̂‖ ≤ √((1 + θ)/m) ‖(Ĥ − H) Δx̂‖^{1/2} ‖Δx̂‖^{1/2}.

Theorem 3.6. Suppose (i) g is twice continuously differentiable and locally strongly convex, and (ii) ∇²g is locally Lipschitz continuous with constant L_2. If {H_k} satisfy the Dennis-Moré criterion and their eigenvalues are bounded, then a proximal quasi-Newton method converges q-superlinearly to x*.

Proof. The assumptions of Lemma 3.4 are satisfied; hence unit step lengths satisfy the sufficient descent condition after sufficiently many iterations:

    x_{k+1} = x_k + Δx_k.

Since the proximal Newton method converges q-quadratically (cf. Theorem 3.3),

    ‖x_{k+1} − x*‖ ≤ ‖x_k + Δx_k^{nt} − x*‖ + ‖Δx_k − Δx_k^{nt}‖
                   ≤ (L_2/(2m))‖x_k − x*‖² + ‖Δx_k − Δx_k^{nt}‖,    (3.4)

where Δx_k^{nt} denotes the proximal Newton search direction. We use Proposition 3.5 to bound the second term:

    ‖Δx_k − Δx_k^{nt}‖ ≤ √((1 + θ)/m) ‖( ∇²g(x_k) − H_k ) Δx_k‖^{1/2} ‖Δx_k‖^{1/2}.    (3.5)

∇²g is Lipschitz continuous and Δx_k satisfies the Dennis-Moré criterion; hence

    ‖( ∇²g(x_k) − H_k ) Δx_k‖ ≤ ‖( ∇²g(x_k) − ∇²g(x*) ) Δx_k‖ + ‖( ∇²g(x*) − H_k ) Δx_k‖
                               ≤ L_2 ‖x_k − x*‖ ‖Δx_k‖ + o(‖Δx_k‖).

Δx_k is within some constant θ_k of Δx_k^{nt} (cf. Lemma 3 in [26]), and we know the proximal Newton method converges q-quadratically. Thus

    ‖Δx_k‖ ≤ θ_k ‖Δx_k^{nt}‖ = θ_k ‖x_{k+1}^{nt} − x_k‖
           ≤ θ_k ( ‖x_{k+1}^{nt} − x*‖ + ‖x_k − x*‖ )
           ≤ O(‖x_k − x*‖²) + θ_k ‖x_k − x*‖.

We substitute these expressions into (3.5) to obtain

    ‖Δx_k − Δx_k^{nt}‖ = o(‖x_k − x*‖).

We substitute this expression into (3.4) to obtain

    ‖x_{k+1} − x*‖ ≤ (L_2/(2m))‖x_k − x*‖² + o(‖x_k − x*‖),

and we deduce that x_k converges to x* superlinearly.

3.3. Convergence of the inexact proximal Newton method. We make the same assumptions made by Dembo et al. in their analysis of inexact Newton methods for minimizing smooth functions [6]: (i) x_k is close to x*, and (ii) the unit step length is eventually accepted. We prove that the inexact proximal Newton method (i) converges q-linearly if the forcing terms η_k are smaller than some fixed η, and (ii) converges q-superlinearly if the forcing terms decay to zero.

First, we prove a consequence of the smoothness of g. Then we use this result to prove that the inexact proximal Newton method converges locally subject to standard assumptions on g and η_k.

Lemma 3.7. Suppose g is locally strongly convex and ∇²g is locally Lipschitz continuous. If x_k is sufficiently close to x*, then for any x,

    ‖x − x*‖_{∇²g(x*)} ≤ (1 + ε)‖x − x*‖_{∇²g(x_k)}.

Proof. We first expand ∇²g(x*)^{1/2}(x − x*) to obtain

    ∇²g(x*)^{1/2}(x − x*) = ( ∇²g(x*)^{1/2} − ∇²g(x_k)^{1/2} )(x − x*) + ∇²g(x_k)^{1/2}(x − x*)
                          = ( I + ( ∇²g(x*)^{1/2} − ∇²g(x_k)^{1/2} ) ∇²g(x_k)^{-1/2} ) ∇²g(x_k)^{1/2}(x − x*).

We take norms to obtain

    ‖x − x*‖_{∇²g(x*)} ≤ ‖I + ( ∇²g(x*)^{1/2} − ∇²g(x_k)^{1/2} ) ∇²g(x_k)^{-1/2}‖ ‖x − x*‖_{∇²g(x_k)}.    (3.6)

If g is locally strongly convex with constant m and x_k is sufficiently close to x*, then

    ‖∇²g(x*)^{1/2} − ∇²g(x_k)^{1/2}‖ ≤ √m ε.

We substitute this bound into (3.6) to deduce that

    ‖x − x*‖_{∇²g(x*)} ≤ (1 + ε)‖x − x*‖_{∇²g(x_k)}.

Theorem 3.8. Suppose (i) g is locally strongly convex with constant m, (ii) ∇²g is locally Lipschitz continuous with constant L_2, and (iii) there exists L_G such that the composite gradient step satisfies

    ‖G_f(x_k)‖_{∇²g(x_k)^{-1}} ≤ L_G ‖x_k − x*‖_{∇²g(x_k)}.    (3.7)

1. If η_k is smaller than some η < 1/L_G, then an inexact proximal Newton method converges q-linearly to x*.
2. If η_k decays to zero, then an inexact proximal Newton method converges q-superlinearly to x*.

Proof. We use Lemma 3.7 to deduce

    ‖x_{k+1} − x*‖_{∇²g(x*)} ≤ (1 + ε_1)‖x_{k+1} − x*‖_{∇²g(x_k)}.    (3.8)

We use the monotonicity of ∂h to bound ‖x_{k+1} − x*‖_{∇²g(x_k)}. First, we have

    ∇²g(x_k)(x_{k+1} − x_k) ∈ −∇g(x_k) − ∂h(x_{k+1}) + r_k    (3.9)

(cf. (2.23)). Also, the exact proximal Newton step at x* (trivially) satisfies

    ∇²g(x_k)(x* − x*) ∈ −∇g(x*) − ∂h(x*).    (3.10)

Subtracting (3.10) from (3.9) and rearranging, we obtain

    ∂h(x_{k+1}) − ∂h(x*) ∋ ∇²g(x_k)(x_k − x_{k+1}) − ∇g(x_k) + ∇g(x*) + r_k.

Since ∂h is monotone,

    0 ≤ (x_{k+1} − x*)^T( ∂h(x_{k+1}) − ∂h(x*) )
      = (x_{k+1} − x*)^T ∇²g(x_k)(x* − x_{k+1}) + (x_{k+1} − x*)^T( ∇²g(x_k)(x_k − x*) − ∇g(x_k) + ∇g(x*) + r_k )
      = −‖x_{k+1} − x*‖²_{∇²g(x_k)} + (x_{k+1} − x*)^T ∇²g(x_k)( x_k − x* + ∇²g(x_k)^{-1}( ∇g(x*) − ∇g(x_k) + r_k ) ).

We take norms (Cauchy-Schwarz in the ∇²g(x_k) inner product) to obtain

    ‖x_{k+1} − x*‖_{∇²g(x_k)} ≤ ‖x_k − x* + ∇²g(x_k)^{-1}( ∇g(x*) − ∇g(x_k) )‖_{∇²g(x_k)} + ‖r_k‖_{∇²g(x_k)^{-1}}.

If x_k is sufficiently close to x*, for any ε_2 > 0 we have

    ‖x_k − x* + ∇²g(x_k)^{-1}( ∇g(x*) − ∇g(x_k) )‖_{∇²g(x_k)} ≤ ε_2 ‖x_k − x*‖_{∇²g(x*)}

and hence

    ‖x_{k+1} − x*‖_{∇²g(x_k)} ≤ ε_2 ‖x_k − x*‖_{∇²g(x*)} + ‖r_k‖_{∇²g(x_k)^{-1}}.    (3.11)

We substitute (3.11) into (3.8) to obtain

    ‖x_{k+1} − x*‖_{∇²g(x*)} ≤ (1 + ε_1)( ε_2 ‖x_k − x*‖_{∇²g(x*)} + ‖r_k‖_{∇²g(x_k)^{-1}} ).    (3.12)

Since Δx_k satisfies the adaptive stopping condition (2.21),

    ‖r_k‖_{∇²g(x_k)^{-1}} ≤ η_k ‖G_f(x_k)‖_{∇²g(x_k)^{-1}},

and since there exists L_G such that G_f satisfies (3.7),

    ‖r_k‖_{∇²g(x_k)^{-1}} ≤ η_k L_G ‖x_k − x*‖_{∇²g(x*)}.    (3.13)

We substitute (3.13) into (3.12) to obtain

    ‖x_{k+1} − x*‖_{∇²g(x*)} ≤ (1 + ε_1)( ε_2 + η_k L_G )‖x_k − x*‖_{∇²g(x*)}.

If η_k is smaller than some

    η < (1/L_G)( 1/(1 + ε_1) − ε_2 ) < 1/L_G,

then x_k converges q-linearly to x*. If η_k decays to zero (the smoothness of g lets ε_1, ε_2 decay to zero), then x_k converges q-superlinearly to x*.

If we assume g is twice continuously differentiable, we can derive an expression for L_G. Combining this result with Theorem 3.8, we deduce the convergence of an inexact Newton method with our adaptive stopping condition (2.21).

Lemma 3.9. Suppose g is locally strongly convex with constant m, and ∇²g is locally Lipschitz continuous. If x_k is sufficiently close to x*, there exists κ such that

    ‖G_f(x_k)‖_{∇²g(x_k)^{-1}} ≤ ( κ(1 + ε) + 1/m )‖x_k − x*‖_{∇²g(x_k)}.

Proof. Since G_f(x*) is zero,

    ‖G_f(x_k)‖_{∇²g(x_k)^{-1}} ≤ (1/√m)‖G_f(x_k) − G_f(x*)‖
                               ≤ (1/√m)( ‖∇g(x_k) − ∇g(x*)‖ + ‖x_k − x*‖ )
                               ≤ κ ‖∇g(x_k) − ∇g(x*)‖_{∇²g(x_k)^{-1}} + (1/m)‖x_k − x*‖_{∇²g(x_k)},    (3.14)

where κ = L_2/m. The second inequality follows from Lemma 2.1. We split ‖∇g(x_k) − ∇g(x*)‖_{∇²g(x_k)^{-1}} into two terms:

    ‖∇g(x_k) − ∇g(x*)‖_{∇²g(x_k)^{-1}} ≤ ‖∇g(x_k) − ∇g(x*) + ∇²g(x_k)(x* − x_k)‖_{∇²g(x_k)^{-1}} + ‖x_k − x*‖_{∇²g(x_k)}.

If x_k is sufficiently close to x*, for any ε_1 > 0 we have

    ‖∇g(x_k) − ∇g(x*) + ∇²g(x_k)(x* − x_k)‖_{∇²g(x_k)^{-1}} ≤ ε_1 ‖x_k − x*‖_{∇²g(x_k)}.

Hence

    ‖∇g(x_k) − ∇g(x*)‖_{∇²g(x_k)^{-1}} ≤ (1 + ε_1)‖x_k − x*‖_{∇²g(x_k)}.

We substitute this bound into (3.14) to obtain

    ‖G_f(x_k)‖_{∇²g(x_k)^{-1}} ≤ ( κ(1 + ε_1) + 1/m )‖x_k − x*‖_{∇²g(x_k)}.

We use Lemma 3.7 to deduce

    ‖G_f(x_k)‖_{∇²g(x_k)^{-1}} ≤ (1 + ε_2)( κ(1 + ε_1) + 1/m )‖x_k − x*‖_{∇²g(x_k)}
                               ≤ ( κ(1 + ε) + 1/m )‖x_k − x*‖_{∇²g(x_k)}.

Corollary 3.10. Suppose (i) g is locally strongly convex with constant m, and (ii) ∇²g is locally Lipschitz continuous with constant L_2.
1. If η_k is smaller than some η < 1/(κ + 1/m), an inexact proximal Newton method converges q-linearly to x*.
2. If η_k decays to zero, an inexact proximal Newton method converges q-superlinearly to x*.

Remark. In many cases we can obtain tighter bounds on ‖G_f(x_k)‖_{∇²g(x_k)^{-1}}. E.g., when minimizing smooth functions (h is zero), we can show

    ‖G_f(x_k)‖_{∇²g(x_k)^{-1}} = ‖∇g(x_k)‖_{∇²g(x_k)^{-1}} ≤ (1 + ε)‖x_k − x*‖_{∇²g(x_k)}.

This yields the classic result of Dembo et al.: if η_k is uniformly smaller than one, then the inexact Newton method converges q-linearly.

Finally, we justify our choice of forcing terms: if we choose η_k according to (2.22), then the inexact proximal Newton method converges q-superlinearly.

Theorem 3.11. Suppose (i) x_0 is sufficiently close to x*, and (ii) the assumptions of Theorem 3.3 are satisfied. If we choose η_k according to (2.22), then the inexact proximal Newton method converges q-superlinearly.

Proof. Since the assumptions of Theorem 3.3 are satisfied, x_k converges locally to x*. Also, since ∇²g is Lipschitz continuous,

    ‖∇ĝ_{k−1}(x_k) − ∇g(x_k)‖ = ‖∇g(x_{k−1}) + ∇²g(x_{k−1}) Δx_{k−1} − ∇g(x_k)‖
        ≤ ‖ ∫_0^1 ( ∇²g(x_{k−1} + s Δx_{k−1}) − ∇²g(x*) ) Δx_{k−1} ds ‖ + ‖( ∇²g(x*) − ∇²g(x_{k−1}) ) Δx_{k−1}‖
        ≤ ( ∫_0^1 L_2 ‖x_{k−1} + s Δx_{k−1} − x*‖ ds ) ‖Δx_{k−1}‖ + L_2 ‖x_{k−1} − x*‖ ‖Δx_{k−1}‖.

We integrate the first term to obtain

    ∫_0^1 L_2 ‖x_{k−1} + s Δx_{k−1} − x*‖ ds ≤ L_2 ‖x_{k−1} − x*‖ + (L_2/2)‖Δx_{k−1}‖.

We substitute these expressions into (2.22) to obtain

    η_k ≤ L_2 ( 2‖x_{k−1} − x*‖ + (1/2)‖Δx_{k−1}‖ ) ‖Δx_{k−1}‖ / ‖∇g(x_{k−1})‖.    (3.15)

If ∇g(x*) ≠ 0, then ‖∇g(x)‖ is bounded away from zero in a neighborhood of x*; hence η_k decays to zero and x_k converges q-superlinearly to x*. Otherwise,

    ‖∇g(x_{k−1})‖ = ‖∇g(x_{k−1}) − ∇g(x*)‖ ≥ m‖x_{k−1} − x*‖.    (3.16)

We substitute (3.15) and (3.16) into (2.22) to obtain

    η_k ≤ (L_2/m)( 2‖x_{k−1} − x*‖ + (1/2)‖Δx_{k−1}‖ ) ‖Δx_{k−1}‖ / ‖x_{k−1} − x*‖.    (3.17)

The triangle inequality yields

    ‖Δx_{k−1}‖ ≤ ‖x_k − x*‖ + ‖x_{k−1} − x*‖.

We divide by ‖x_{k−1} − x*‖ to obtain

    ‖Δx_{k−1}‖ / ‖x_{k−1} − x*‖ ≤ 1 + ‖x_k − x*‖ / ‖x_{k−1} − x*‖.

If k is sufficiently large, x_k converges q-linearly to x* and hence ‖Δx_{k−1}‖ / ‖x_{k−1} − x*‖ ≤ 2. We substitute this expression into (3.17) to obtain

    η_k ≤ (L_2/m)( 4‖x_{k−1} − x*‖ + ‖Δx_{k−1}‖ ).

Hence η_k decays to zero, and x_k converges q-superlinearly to x*.

4. Computational experiments. First we explore how inexact search directions affect the convergence behavior of proximal Newton-type methods on a problem in bioinformatics. We show that choosing the forcing terms according to (2.22) avoids oversolving the subproblem. Then we demonstrate the performance of proximal Newton-type methods using a problem in statistical learning. We show that the methods are suited to problems with expensive smooth function evaluations.

4.1. Inverse covariance estimation. Suppose we are given i.i.d. samples x^(1), ..., x^(m) from a Gaussian Markov random field (MRF) with unknown inverse covariance matrix Θ:

    Pr(x; Θ) ∝ exp( −x^T Θ x / 2 + (1/2) log det(Θ) ).

Fig. 4.1: Inverse covariance estimation problem (Estrogen dataset). Convergence behavior of proximal BFGS method with three subproblem stopping conditions (adaptive, maxiter = 10, exact); the two panels plot relative suboptimality against function evaluations and against time (sec).

We seek a sparse maximum likelihood estimate of the inverse covariance matrix:

    Θ̂ := arg min_{Θ ∈ R^{n×n}} tr(Σ̂ Θ) − log det(Θ) + λ‖vec(Θ)‖_1,    (4.1)

where Σ̂ denotes the sample covariance matrix. We regularize using an entry-wise ℓ_1 norm to avoid overfitting the data and to promote sparse estimates. λ is a parameter that balances goodness-of-fit and sparsity.

We use two datasets: (i) Estrogen, a gene expression dataset consisting of 682 probe sets collected from 158 patients, and (ii) Leukemia, another gene expression dataset consisting of 1255 genes from 72 patients.¹ The features of Estrogen were converted to log-scale and normalized to have zero mean and unit variance. λ was chosen to match the values used in [20].

We solve the inverse covariance estimation problem (4.1) using a proximal BFGS method, i.e., H_k is updated according to the BFGS updating formula. To explore how inexact search directions affect the convergence behavior, we use three rules to decide how accurately to solve subproblem (2.5):
1. adaptive: stop when the adaptive stopping condition (2.21) is satisfied;
2. exact: solve the subproblem exactly;
3. maxiter = 10: stop after 10 iterations.
We plot relative suboptimality versus function evaluations and time on the Estrogen dataset in Figure 4.1 and on the Leukemia dataset in Figure 4.2.

On both datasets, the exact stopping condition yields the fastest convergence (ignoring computational expense per step), followed closely by the adaptive stopping condition (see Figures 4.1 and 4.2). If we account for time per step, then the adaptive stopping condition yields the fastest convergence. Note that the adaptive stopping condition yields superlinear convergence (like the exact proximal BFGS method). The third stopping condition (maxiter = 10) yields only linear convergence (like a first-order method), and its convergence rate is affected by the condition number of Θ̂. On the Leukemia dataset, the condition number is worse and the convergence is slower.

¹ These datasets are available from with the SPIN-COVSE package.
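For reference, the smooth part of (4.1) and its gradient can be evaluated as in the sketch below (our illustration, not the code used for the experiments); the regularizer λ‖vec(Θ)‖_1 enters only through its proximal mapping, which is entry-wise soft-thresholding.

```python
import numpy as np

def g_invcov(Theta, S):
    """Smooth part of (4.1): tr(S Theta) - log det(Theta), Theta positive definite."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return np.trace(S @ Theta) - logdet

def grad_g_invcov(Theta, S):
    """Gradient of the smooth part: S - Theta^{-1}."""
    return S - np.linalg.inv(Theta)

def prox_l1_entrywise(Theta, t, lam):
    """Proximal mapping of t*lam*||vec(.)||_1: entry-wise soft-thresholding."""
    return np.sign(Theta) * np.maximum(np.abs(Theta) - t * lam, 0.0)
```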

Fig. 4.2: Inverse covariance estimation problem (Leukemia dataset). Convergence behavior of proximal BFGS method with three subproblem stopping conditions (adaptive, maxiter = 10, exact); the two panels plot relative suboptimality against function evaluations and against time (sec).

4.2. Logistic regression. Suppose we are given samples x^(1), ..., x^(m) with labels y^(1), ..., y^(m) ∈ {−1, 1}. We fit a logit model to our data:

    minimize_{w ∈ R^n} (1/m) Σ_{i=1}^m log(1 + exp(−y_i w^T x_i)) + λ‖w‖_1.    (4.2)

Again, the regularization term ‖w‖_1 promotes sparse solutions and λ balances goodness-of-fit and sparsity.

We use two datasets: (i) gisette, a handwritten digits dataset from the NIPS 2003 feature selection challenge (n = 5000), and (ii) rcv1, an archive of categorized news stories from Reuters (n = 47,000).² The features of gisette have been scaled to lie within the interval [−1, 1], and those of rcv1 have been scaled to be unit vectors. λ was chosen to match the value reported in [29], where it was chosen by five-fold cross validation on the training set.

We compare a proximal L-BFGS method with SpaRSA and the TFOCS implementation of FISTA (also Nesterov's 1983 method) on problem (4.2). We plot relative suboptimality versus function evaluations and time on the gisette dataset in Figure 4.3 and on the rcv1 dataset in Figure 4.4.

The smooth part requires many expensive exp/log operations to evaluate. On the dense gisette dataset (30 million nonzero entries in a 6000 × 5000 design matrix), evaluating g dominates the computational cost. The proximal L-BFGS method clearly outperforms the other methods because the computational expense is shifted to solving the subproblems, whose objective functions are cheap to evaluate (see Figure 4.3). On the sparse rcv1 dataset (40 million nonzero entries in a 542,000 × 47,000 design matrix), the cost of evaluating g makes up a smaller portion of the total cost, and the proximal L-BFGS method barely outperforms SpaRSA (see Figure 4.4).

² These datasets are available from datasets.
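Similarly, here is a sketch (illustrative only) of the smooth part of (4.2) and its gradient, assuming the samples are the rows of X and the labels take values ±1; np.logaddexp is used for numerical stability.

```python
import numpy as np

def g_logistic(w, X, y):
    """Smooth part of (4.2): (1/m) * sum_i log(1 + exp(-y_i * w'x_i))."""
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins))

def grad_g_logistic(w, X, y):
    """Gradient: -(1/m) * sum_i y_i * sigmoid(-y_i * w'x_i) * x_i."""
    margins = y * (X @ w)
    coef = -y / (1.0 + np.exp(margins))      # -y_i * sigmoid(-margin_i)
    return X.T @ coef / X.shape[0]
```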

Fig. 4.3: Logistic regression problem (gisette dataset). Proximal L-BFGS method (L = 50) versus FISTA and SpaRSA; the two panels plot relative suboptimality against function evaluations and against time (sec).

Fig. 4.4: Logistic regression problem (rcv1 dataset). Proximal L-BFGS method (L = 50) versus FISTA and SpaRSA; the two panels plot relative suboptimality against function evaluations and against time (sec).

4.3. Software: PNOPT. The methods described in this work have been incorporated into a Matlab package PNOPT (Proximal Newton OPTimizer, pronounced pee-en-opt) and made publicly available from the Systems Optimization Laboratory (SOL) website. PNOPT shares an interface with the software package TFOCS [3] and is compatible with the function generators included with TFOCS. We refer to the SOL website for details about PNOPT.

5. Conclusion. Given the popularity of first-order methods for minimizing composite functions, there has been a flurry of activity around the development of Newton-type methods for minimizing composite functions [12, 2, 17]. We analyze proximal Newton-type methods for such functions and show that they have several strong advantages over first-order methods:
1. They converge rapidly near the optimal solution, and can produce a solution of high accuracy.
2. They are insensitive to the choice of coordinate system and to the condition number of the level sets of the objective.
3. They scale well with problem size.

The main disadvantage is the cost of solving the subproblem. We have shown that it is possible to reduce the cost and retain the fast convergence rate by solving the subproblems inexactly. We hope our results kindle further interest in proximal Newton-type methods as an alternative to first-order and interior point methods for minimizing composite functions.

Acknowledgements. We thank Santiago Akle, Trevor Hastie, Nick Henderson, Qiang Liu, Ernest Ryu, Ed Schmerling, Carlos Sing-Long, Walter Murray, and three anonymous referees for their insightful comments. J. Lee was supported by a National Defense Science and Engineering Graduate Fellowship (NDSEG) and an NSF Graduate Fellowship. Y. Sun and M. Saunders were partially supported by the DOE through the Scientific Discovery through Advanced Computing program, grant DE-FG02-09ER25917, and by the NIH, award number 1U01GM. M. Saunders was also partially supported by the ONR, grant N.

Appendix A. Proofs.

Lemma 3.4. Suppose g is twice continuously differentiable and ∇²g is locally Lipschitz continuous with constant L_2. If {H_k} satisfy the Dennis-Moré criterion (3.1) and their eigenvalues are bounded, then the unit step length satisfies the sufficient descent condition (2.15) after sufficiently many iterations.

Proof. Since ∇²g is locally Lipschitz continuous with constant L_2,

    g(x + Δx) ≤ g(x) + ∇g(x)^T Δx + (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³.

We add h(x + Δx) to both sides to obtain

    f(x + Δx) ≤ g(x) + ∇g(x)^T Δx + (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³ + h(x + Δx).

We then add and subtract h(x) from the right-hand side and use (2.16) to obtain

    f(x + Δx) ≤ f(x) + Δ + (1/2) Δx^T ∇²g(x) Δx + (L_2/6)‖Δx‖³.

We add and subtract (1/2) Δx^T H Δx to yield

    f(x + Δx) − f(x) ≤ ( 1/2 − (L_2/(6m))‖Δx‖ ) Δ + (1/2) Δx^T ( ∇²g(x) − H ) Δx,    (A.1)

where we again use (2.11), together with m‖Δx‖² ≤ Δx^T H Δx. ∇²g is locally Lipschitz continuous and Δx satisfies the Dennis-Moré criterion. Thus

    (1/2)|Δx^T ( ∇²g(x) − H ) Δx| ≤ (1/2)|Δx^T ( ∇²g(x) − ∇²g(x*) ) Δx| + (1/2)|Δx^T ( ∇²g(x*) − H ) Δx|
        ≤ (1/2)‖∇²g(x) − ∇²g(x*)‖‖Δx‖² + (1/2)‖( ∇²g(x*) − H ) Δx‖‖Δx‖
        ≤ (L_2/2)‖x − x*‖‖Δx‖² + o(‖Δx‖²).

We substitute this expression into (A.1) and rearrange to obtain

    f(x + Δx) − f(x) ≤ ( 1/2 − (L_2/(6m))‖Δx‖ ) Δ + o(‖Δx‖²).

We can show Δx_k converges to zero via the argument used in the proof of Theorem 3.1, and (2.11) gives ‖Δx_k‖² ≤ −Δ_k/m. Hence, for k sufficiently large,

    f(x_k + Δx_k) − f(x_k) ≤ ( 1/2 − o(1) ) Δ_k ≤ α Δ_k,

since α < 1/2.

REFERENCES

[1] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1).
[2] S. Becker and M.J. Fadili. A quasi-Newton proximal splitting method. In Adv. Neural Inf. Process. Syst. (NIPS).
[3] S.R. Becker, E.J. Candès, and M.C. Grant. Templates for convex cone problems with applications to sparse signal recovery. Math. Prog. Comp., 3(3).
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press.
[5] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6).
[6] R.S. Dembo, S.C. Eisenstat, and T. Steihaug. Inexact Newton methods. SIAM J. Num. Anal., 19(2).
[7] J.E. Dennis and J.J. Moré. A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comp., 28(126).
[8] S.C. Eisenstat and H.F. Walker. Choosing the forcing terms in an inexact Newton method. SIAM J. Sci. Comput., 17(1):16-32.
[9] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Ann. Appl. Stat., 1(2).
[10] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostat., 9(3).
[11] M. Fukushima and H. Mine. A generalized proximal point algorithm for certain non-convex minimization problems. Internat. J. Systems Sci., 12(8).
[12] C.J. Hsieh, M.A. Sustik, I.S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In Adv. Neural Inf. Process. Syst. (NIPS).
[13] D. Kim, S. Sra, and I.S. Dhillon. A scalable trust-region algorithm with application to mixed-norm regression. In Int. Conf. Mach. Learn. (ICML).
[14] J.D. Lee, Y. Sun, and M.A. Saunders. Proximal Newton-type methods for minimizing composite functions. In Adv. Neural Inf. Process. Syst. (NIPS).
[15] Z. Lu and Y. Zhang. An augmented Lagrangian approach for sparse principal component analysis. Math. Prog. Ser. A, pages 1-45.
[16] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer.
[17] P.A. Olsen, F. Oztoprak, J. Nocedal, and S.J. Rennie. Newton-like methods for sparse inverse covariance estimation. In Adv. Neural Inf. Process. Syst. (NIPS).
[18] M. Patriksson. Cost approximation: a unified framework of descent algorithms for nonlinear programs. SIAM J. Optim., 8(2), 1998.

[19] M. Patriksson. Nonlinear Programming and Variational Inequality Problems: A Unified Approach. Kluwer Academic Publishers.
[20] B. Rolfs, B. Rajaratnam, D. Guillot, I. Wong, and A. Maleki. Iterative thresholding algorithm for sparse inverse covariance estimation. In Adv. Neural Inf. Process. Syst. (NIPS).
[21] M. Schmidt. Graphical model structure learning with l1-regularization. PhD thesis, University of British Columbia.
[22] M. Schmidt, D. Kim, and S. Sra. Projected Newton-type methods in machine learning. In S. Sra, S. Nowozin, and S.J. Wright, editors, Optimization for Machine Learning. MIT Press.
[23] M. Schmidt, E. Van Den Berg, M.P. Friedlander, and K. Murphy. Optimizing costly functions with simple constraints: a limited-memory projected quasi-Newton algorithm. In Int. Conf. Artif. Intell. Stat. (AISTATS).
[24] R. Tibshirani. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol.
[25] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Unpublished manuscript.
[26] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Math. Prog. Ser. B, 117(1).
[27] S.J. Wright, R.D. Nowak, and M.A.T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process., 57(7).
[28] J. Yu, S.V.N. Vishwanathan, S. Günter, and N.N. Schraudolph. A quasi-Newton approach to nonsmooth convex optimization problems in machine learning. J. Mach. Learn. Res., 11.
[29] G.X. Yuan, C.H. Ho, and C.J. Lin. An improved glmnet for l1-regularized logistic regression. J. Mach. Learn. Res., 13, 2012.


More information

Subgradient Method. Ryan Tibshirani Convex Optimization

Subgradient Method. Ryan Tibshirani Convex Optimization Subgradient Method Ryan Tibshirani Convex Optimization 10-725 Consider the problem Last last time: gradient descent min x f(x) for f convex and differentiable, dom(f) = R n. Gradient descent: choose initial

More information

Sparse Covariance Selection using Semidefinite Programming

Sparse Covariance Selection using Semidefinite Programming Sparse Covariance Selection using Semidefinite Programming A. d Aspremont ORFE, Princeton University Joint work with O. Banerjee, L. El Ghaoui & G. Natsoulis, U.C. Berkeley & Iconix Pharmaceuticals Support

More information

Lecture 23: November 21

Lecture 23: November 21 10-725/36-725: Convex Optimization Fall 2016 Lecturer: Ryan Tibshirani Lecture 23: November 21 Scribes: Yifan Sun, Ananya Kumar, Xin Lu Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:

More information

Gradient Descent. Ryan Tibshirani Convex Optimization /36-725

Gradient Descent. Ryan Tibshirani Convex Optimization /36-725 Gradient Descent Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: canonical convex programs Linear program (LP): takes the form min x subject to c T x Gx h Ax = b Quadratic program (QP): like

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley A. d Aspremont, INFORMS, Denver,

More information

Convex Optimization Algorithms for Machine Learning in 10 Slides

Convex Optimization Algorithms for Machine Learning in 10 Slides Convex Optimization Algorithms for Machine Learning in 10 Slides Presenter: Jul. 15. 2015 Outline 1 Quadratic Problem Linear System 2 Smooth Problem Newton-CG 3 Composite Problem Proximal-Newton-CD 4 Non-smooth,

More information

Optimization for Machine Learning

Optimization for Machine Learning Optimization for Machine Learning (Lecture 3-A - Convex) SUVRIT SRA Massachusetts Institute of Technology Special thanks: Francis Bach (INRIA, ENS) (for sharing this material, and permitting its use) MPI-IS

More information

Newton s Method. Ryan Tibshirani Convex Optimization /36-725

Newton s Method. Ryan Tibshirani Convex Optimization /36-725 Newton s Method Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x

More information

Coordinate Update Algorithm Short Course Proximal Operators and Algorithms

Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Coordinate Update Algorithm Short Course Proximal Operators and Algorithms Instructor: Wotao Yin (UCLA Math) Summer 2016 1 / 36 Why proximal? Newton s method: for C 2 -smooth, unconstrained problems allow

More information

Randomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming

Randomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming Randomized Block Coordinate Non-Monotone Gradient Method for a Class of Nonlinear Programming Zhaosong Lu Lin Xiao June 25, 2013 Abstract In this paper we propose a randomized block coordinate non-monotone

More information

A direct formulation for sparse PCA using semidefinite programming

A direct formulation for sparse PCA using semidefinite programming A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon

More information

An accelerated algorithm for minimizing convex compositions

An accelerated algorithm for minimizing convex compositions An accelerated algorithm for minimizing convex compositions D. Drusvyatskiy C. Kempton April 30, 016 Abstract We describe a new proximal algorithm for minimizing compositions of finite-valued convex functions

More information

Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization

Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization Parallel Successive Convex Approximation for Nonsmooth Nonconvex Optimization Meisam Razaviyayn meisamr@stanford.edu Mingyi Hong mingyi@iastate.edu Zhi-Quan Luo luozq@umn.edu Jong-Shi Pang jongship@usc.edu

More information

Frank-Wolfe Method. Ryan Tibshirani Convex Optimization

Frank-Wolfe Method. Ryan Tibshirani Convex Optimization Frank-Wolfe Method Ryan Tibshirani Convex Optimization 10-725 Last time: ADMM For the problem min x,z f(x) + g(z) subject to Ax + Bz = c we form augmented Lagrangian (scaled form): L ρ (x, z, w) = f(x)

More information

A Brief Overview of Practical Optimization Algorithms in the Context of Relaxation

A Brief Overview of Practical Optimization Algorithms in the Context of Relaxation A Brief Overview of Practical Optimization Algorithms in the Context of Relaxation Zhouchen Lin Peking University April 22, 2018 Too Many Opt. Problems! Too Many Opt. Algorithms! Zero-th order algorithms:

More information

5. Subgradient method

5. Subgradient method L. Vandenberghe EE236C (Spring 2016) 5. Subgradient method subgradient method convergence analysis optimal step size when f is known alternating projections optimality 5-1 Subgradient method to minimize

More information

Newton s Method. Javier Peña Convex Optimization /36-725

Newton s Method. Javier Peña Convex Optimization /36-725 Newton s Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, f ( (y) = max y T x f(x) ) x Properties and

More information

Sub-Sampled Newton Methods

Sub-Sampled Newton Methods Sub-Sampled Newton Methods F. Roosta-Khorasani and M. W. Mahoney ICSI and Dept of Statistics, UC Berkeley February 2016 F. Roosta-Khorasani and M. W. Mahoney (UCB) Sub-Sampled Newton Methods Feb 2016 1

More information

This can be 2 lectures! still need: Examples: non-convex problems applications for matrix factorization

This can be 2 lectures! still need: Examples: non-convex problems applications for matrix factorization This can be 2 lectures! still need: Examples: non-convex problems applications for matrix factorization x = prox_f(x)+prox_{f^*}(x) use to get prox of norms! PROXIMAL METHODS WHY PROXIMAL METHODS Smooth

More information

10. Unconstrained minimization

10. Unconstrained minimization Convex Optimization Boyd & Vandenberghe 10. Unconstrained minimization terminology and assumptions gradient descent method steepest descent method Newton s method self-concordant functions implementation

More information

Active-set complexity of proximal gradient

Active-set complexity of proximal gradient Noname manuscript No. will be inserted by the editor) Active-set complexity of proximal gradient How long does it take to find the sparsity pattern? Julie Nutini Mark Schmidt Warren Hare Received: date

More information

Robust Inverse Covariance Estimation under Noisy Measurements

Robust Inverse Covariance Estimation under Noisy Measurements .. Robust Inverse Covariance Estimation under Noisy Measurements Jun-Kun Wang, Shou-De Lin Intel-NTU, National Taiwan University ICML 2014 1 / 30 . Table of contents Introduction.1 Introduction.2 Related

More information

Numerical Methods for Large-Scale Nonlinear Equations

Numerical Methods for Large-Scale Nonlinear Equations Slide 1 Numerical Methods for Large-Scale Nonlinear Equations Homer Walker MA 512 April 28, 2005 Inexact Newton and Newton Krylov Methods a. Newton-iterative and inexact Newton methods. Slide 2 i. Formulation

More information

Lecture 1: September 25

Lecture 1: September 25 0-725: Optimization Fall 202 Lecture : September 25 Lecturer: Geoff Gordon/Ryan Tibshirani Scribes: Subhodeep Moitra Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have

More information

Properties of optimizations used in penalized Gaussian likelihood inverse covariance matrix estimation

Properties of optimizations used in penalized Gaussian likelihood inverse covariance matrix estimation Properties of optimizations used in penalized Gaussian likelihood inverse covariance matrix estimation Adam J. Rothman School of Statistics University of Minnesota October 8, 2014, joint work with Liliana

More information

Sparse regression. Optimization-Based Data Analysis. Carlos Fernandez-Granda

Sparse regression. Optimization-Based Data Analysis.   Carlos Fernandez-Granda Sparse regression Optimization-Based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_spring16 Carlos Fernandez-Granda 3/28/2016 Regression Least-squares regression Example: Global warming Logistic

More information

c 2007 Society for Industrial and Applied Mathematics

c 2007 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 18, No. 1, pp. 106 13 c 007 Society for Industrial and Applied Mathematics APPROXIMATE GAUSS NEWTON METHODS FOR NONLINEAR LEAST SQUARES PROBLEMS S. GRATTON, A. S. LAWLESS, AND N. K.

More information

Proximal Minimization by Incremental Surrogate Optimization (MISO)

Proximal Minimization by Incremental Surrogate Optimization (MISO) Proximal Minimization by Incremental Surrogate Optimization (MISO) (and a few variants) Julien Mairal Inria, Grenoble ICCOPT, Tokyo, 2016 Julien Mairal, Inria MISO 1/26 Motivation: large-scale machine

More information

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28

Sparsity Models. Tong Zhang. Rutgers University. T. Zhang (Rutgers) Sparsity Models 1 / 28 Sparsity Models Tong Zhang Rutgers University T. Zhang (Rutgers) Sparsity Models 1 / 28 Topics Standard sparse regression model algorithms: convex relaxation and greedy algorithm sparse recovery analysis:

More information

Descent methods. min x. f(x)

Descent methods. min x. f(x) Gradient Descent Descent methods min x f(x) 5 / 34 Descent methods min x f(x) x k x k+1... x f(x ) = 0 5 / 34 Gradient methods Unconstrained optimization min f(x) x R n. 6 / 34 Gradient methods Unconstrained

More information

QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models

QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep Ravikumar University of Texas at Austin Austin, TX 7872 USA {cjhsieh,inderjit,pradeepr}@cs.utexas.edu

More information

A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization

A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization A Multilevel Proximal Algorithm for Large Scale Composite Convex Optimization Panos Parpas Department of Computing Imperial College London www.doc.ic.ac.uk/ pp500 p.parpas@imperial.ac.uk jointly with D.V.

More information

A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices

A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices Ryota Tomioka 1, Taiji Suzuki 1, Masashi Sugiyama 2, Hisashi Kashima 1 1 The University of Tokyo 2 Tokyo Institute of Technology 2010-06-22

More information

Cubic regularization of Newton s method for convex problems with constraints

Cubic regularization of Newton s method for convex problems with constraints CORE DISCUSSION PAPER 006/39 Cubic regularization of Newton s method for convex problems with constraints Yu. Nesterov March 31, 006 Abstract In this paper we derive efficiency estimates of the regularized

More information

Sparse Optimization Lecture: Dual Methods, Part I

Sparse Optimization Lecture: Dual Methods, Part I Sparse Optimization Lecture: Dual Methods, Part I Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know dual (sub)gradient iteration augmented l 1 iteration

More information

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications

A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications A globally and R-linearly convergent hybrid HS and PRP method and its inexact version with applications Weijun Zhou 28 October 20 Abstract A hybrid HS and PRP type conjugate gradient method for smooth

More information

Stochastic Optimization: First order method

Stochastic Optimization: First order method Stochastic Optimization: First order method Taiji Suzuki Tokyo Institute of Technology Graduate School of Information Science and Engineering Department of Mathematical and Computing Sciences JST, PRESTO

More information

Relative-Continuity for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent

Relative-Continuity for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent Relative-Continuity for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent Haihao Lu August 3, 08 Abstract The usual approach to developing and analyzing first-order

More information

An iterative hard thresholding estimator for low rank matrix recovery

An iterative hard thresholding estimator for low rank matrix recovery An iterative hard thresholding estimator for low rank matrix recovery Alexandra Carpentier - based on a joint work with Arlene K.Y. Kim Statistical Laboratory, Department of Pure Mathematics and Mathematical

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 15. Suvrit Sra. (Gradient methods III) 12 March, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 15. Suvrit Sra. (Gradient methods III) 12 March, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 15 (Gradient methods III) 12 March, 2013 Suvrit Sra Optimal gradient methods 2 / 27 Optimal gradient methods We saw following efficiency estimates for

More information

Stochastic and online algorithms

Stochastic and online algorithms Stochastic and online algorithms stochastic gradient method online optimization and dual averaging method minimizing finite average Stochastic and online optimization 6 1 Stochastic optimization problem

More information

Selected Topics in Optimization. Some slides borrowed from

Selected Topics in Optimization. Some slides borrowed from Selected Topics in Optimization Some slides borrowed from http://www.stat.cmu.edu/~ryantibs/convexopt/ Overview Optimization problems are almost everywhere in statistics and machine learning. Input Model

More information

ARock: an algorithmic framework for asynchronous parallel coordinate updates

ARock: an algorithmic framework for asynchronous parallel coordinate updates ARock: an algorithmic framework for asynchronous parallel coordinate updates Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin ( UCLA Math, U.Waterloo DCO) UCLA CAM Report 15-37 ShanghaiTech SSDS 15 June 25,

More information

ORIE 6326: Convex Optimization. Quasi-Newton Methods

ORIE 6326: Convex Optimization. Quasi-Newton Methods ORIE 6326: Convex Optimization Quasi-Newton Methods Professor Udell Operations Research and Information Engineering Cornell April 10, 2017 Slides on steepest descent and analysis of Newton s method adapted

More information

A derivative-free nonmonotone line search and its application to the spectral residual method

A derivative-free nonmonotone line search and its application to the spectral residual method IMA Journal of Numerical Analysis (2009) 29, 814 825 doi:10.1093/imanum/drn019 Advance Access publication on November 14, 2008 A derivative-free nonmonotone line search and its application to the spectral

More information

Quasi-Newton Methods. Javier Peña Convex Optimization /36-725

Quasi-Newton Methods. Javier Peña Convex Optimization /36-725 Quasi-Newton Methods Javier Peña Convex Optimization 10-725/36-725 Last time: primal-dual interior-point methods Consider the problem min x subject to f(x) Ax = b h(x) 0 Assume f, h 1,..., h m are convex

More information

Accelerating SVRG via second-order information

Accelerating SVRG via second-order information Accelerating via second-order information Ritesh Kolte Department of Electrical Engineering rkolte@stanford.edu Murat Erdogdu Department of Statistics erdogdu@stanford.edu Ayfer Özgür Department of Electrical

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

I P IANO : I NERTIAL P ROXIMAL A LGORITHM FOR N ON -C ONVEX O PTIMIZATION

I P IANO : I NERTIAL P ROXIMAL A LGORITHM FOR N ON -C ONVEX O PTIMIZATION I P IANO : I NERTIAL P ROXIMAL A LGORITHM FOR N ON -C ONVEX O PTIMIZATION Peter Ochs University of Freiburg Germany 17.01.2017 joint work with: Thomas Brox and Thomas Pock c 2017 Peter Ochs ipiano c 1

More information

Optimization. Benjamin Recht University of California, Berkeley Stephen Wright University of Wisconsin-Madison

Optimization. Benjamin Recht University of California, Berkeley Stephen Wright University of Wisconsin-Madison Optimization Benjamin Recht University of California, Berkeley Stephen Wright University of Wisconsin-Madison optimization () cost constraints might be too much to cover in 3 hours optimization (for big

More information

8 Numerical methods for unconstrained problems

8 Numerical methods for unconstrained problems 8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields

More information

Introduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems

Introduction. New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems New Nonsmooth Trust Region Method for Unconstraint Locally Lipschitz Optimization Problems Z. Akbari 1, R. Yousefpour 2, M. R. Peyghami 3 1 Department of Mathematics, K.N. Toosi University of Technology,

More information

LARGE-SCALE NONCONVEX STOCHASTIC OPTIMIZATION BY DOUBLY STOCHASTIC SUCCESSIVE CONVEX APPROXIMATION

LARGE-SCALE NONCONVEX STOCHASTIC OPTIMIZATION BY DOUBLY STOCHASTIC SUCCESSIVE CONVEX APPROXIMATION LARGE-SCALE NONCONVEX STOCHASTIC OPTIMIZATION BY DOUBLY STOCHASTIC SUCCESSIVE CONVEX APPROXIMATION Aryan Mokhtari, Alec Koppel, Gesualdo Scutari, and Alejandro Ribeiro Department of Electrical and Systems

More information

Large-scale Stochastic Optimization

Large-scale Stochastic Optimization Large-scale Stochastic Optimization 11-741/641/441 (Spring 2016) Hanxiao Liu hanxiaol@cs.cmu.edu March 24, 2016 1 / 22 Outline 1. Gradient Descent (GD) 2. Stochastic Gradient Descent (SGD) Formulation

More information

Selected Methods for Modern Optimization in Data Analysis Department of Statistics and Operations Research UNC-Chapel Hill Fall 2018

Selected Methods for Modern Optimization in Data Analysis Department of Statistics and Operations Research UNC-Chapel Hill Fall 2018 Selected Methods for Modern Optimization in Data Analysis Department of Statistics and Operations Research UNC-Chapel Hill Fall 08 Instructor: Quoc Tran-Dinh Scriber: Quoc Tran-Dinh Lecture 4: Selected

More information

Subgradient Method. Guest Lecturer: Fatma Kilinc-Karzan. Instructors: Pradeep Ravikumar, Aarti Singh Convex Optimization /36-725

Subgradient Method. Guest Lecturer: Fatma Kilinc-Karzan. Instructors: Pradeep Ravikumar, Aarti Singh Convex Optimization /36-725 Subgradient Method Guest Lecturer: Fatma Kilinc-Karzan Instructors: Pradeep Ravikumar, Aarti Singh Convex Optimization 10-725/36-725 Adapted from slides from Ryan Tibshirani Consider the problem Recall:

More information

Sparse Gaussian conditional random fields

Sparse Gaussian conditional random fields Sparse Gaussian conditional random fields Matt Wytock, J. ico Kolter School of Computer Science Carnegie Mellon University Pittsburgh, PA 53 {mwytock, zkolter}@cs.cmu.edu Abstract We propose sparse Gaussian

More information

Sparse inverse covariance estimation with the lasso

Sparse inverse covariance estimation with the lasso Sparse inverse covariance estimation with the lasso Jerome Friedman Trevor Hastie and Robert Tibshirani November 8, 2007 Abstract We consider the problem of estimating sparse graphs by a lasso penalty

More information