Gradient Descent
Barnabas Poczos & Ryan Tibshirani
Convex Optimization 10-725/36-725

Gradient descent

First consider unconstrained minimization of $f : \mathbb{R}^n \to \mathbb{R}$, convex and differentiable. We want to solve

$\min_{x \in \mathbb{R}^n} f(x)$,

i.e., find $x^\star$ such that $f(x^\star) = \min_x f(x)$.

Gradient descent: choose an initial point $x^{(0)} \in \mathbb{R}^n$, and repeat:

$x^{(k)} = x^{(k-1)} - t_k \nabla f(x^{(k-1)}), \quad k = 1, 2, 3, \ldots$

Stop at some point.
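A minimal sketch of this iteration in Python; the example function, fixed step size, and stopping tolerance are illustrative choices, not part of the slides:

```python
import numpy as np

def gradient_descent(grad, x0, t=0.1, max_iter=1000, tol=1e-8):
    """Iterate x^(k) = x^(k-1) - t * grad(x^(k-1)) until the gradient is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:  # "stop at some point"
            break
        x = x - t * g
    return x

# Example: minimize f(x) = ||x - 1||_2^2 / 2, whose gradient is x - 1
print(gradient_descent(lambda x: x - 1.0, x0=np.zeros(3)))  # ~ [1. 1. 1.]
```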

Interpretation

At each iteration, consider the expansion

$f(y) \approx f(x) + \nabla f(x)^T (y - x) + \frac{1}{2t} \|y - x\|_2^2$

This is a quadratic approximation, replacing the usual Hessian $\nabla^2 f(x)$ by $\frac{1}{t} I$:
- $f(x) + \nabla f(x)^T (y - x)$ is the linear approximation to $f$
- $\frac{1}{2t} \|y - x\|_2^2$ is a proximity term to $x$, with weight $1/(2t)$

Choose the next point $y = x^+$ to minimize the quadratic approximation:

$x^+ = x - t \nabla f(x)$
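Filling in the one-line minimization behind that update: set the gradient of the quadratic approximation with respect to $y$ to zero.

```latex
\nabla_y \left( f(x) + \nabla f(x)^T (y - x) + \tfrac{1}{2t} \|y - x\|_2^2 \right)
  = \nabla f(x) + \tfrac{1}{t}(y - x) = 0
\;\Longrightarrow\;
y = x - t \, \nabla f(x)
```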

[Figure: quadratic approximation to $f$ around $x$. Blue point is $x$, red point is $x^+ = \operatorname{argmin}_{y \in \mathbb{R}^n} \; f(x) + \nabla f(x)^T (y - x) + \|y - x\|_2^2 / (2t)$]

Outline

Today:
- How to choose the step size $t_k$
- Convergence under Lipschitz gradient
- Convergence under strong convexity
- Forward stagewise regression, boosting

Fixed step size

Simply take $t_k = t$ for all $k = 1, 2, 3, \ldots$; this can diverge if $t$ is too big. Consider $f(x) = (10 x_1^2 + x_2^2)/2$, gradient descent after 8 steps:

[Figure: contour plot of $f$ with diverging gradient descent iterates]

Can be slow if $t$ is too small. Same example, gradient descent after 100 steps:

[Figure: contour plot with iterates making slow progress toward the optimum]

Same example, gradient descent after 40 appropriately sized steps:

[Figure: contour plot with iterates converging to the optimum]

This porridge is too hot! too cold! juuussst right. The convergence analysis later will give us a better idea.
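A short script reproducing this experiment on $f(x) = (10 x_1^2 + x_2^2)/2$; the exact step sizes used for the plots are not stated on the slides, so the three values below are illustrative:

```python
import numpy as np

def f(x):
    return (10 * x[0]**2 + x[1]**2) / 2

def grad(x):
    return np.array([10 * x[0], x[1]])

def run(t, steps, x0=(10.0, 10.0)):
    x = np.array(x0)
    for _ in range(steps):
        x = x - t * grad(x)
    return f(x)

# The largest Hessian eigenvalue is L = 10, so t <= 1/L = 0.1 is safe.
print(run(t=0.25, steps=8))    # too hot: diverges, since |1 - 10t| > 1
print(run(t=0.01, steps=100))  # too cold: still far from the optimum 0
print(run(t=0.10, steps=40))   # just right: essentially 0
```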

Backtracking line search

One way to adaptively choose the step size is to use backtracking line search:
- First fix parameters $0 < \beta < 1$ and $0 < \alpha \le 1/2$
- Then at each iteration, start with $t = 1$, and while

  $f(x - t \nabla f(x)) > f(x) - \alpha t \|\nabla f(x)\|_2^2$,

  update $t = \beta t$

Simple and tends to work pretty well in practice.
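A sketch of this rule wrapped around one gradient step; the slides only require $0 < \beta < 1$ and $0 < \alpha \le 1/2$, so the particular defaults below are illustrative:

```python
import numpy as np

def backtracking_step(f, grad, x, alpha=0.3, beta=0.8):
    """Shrink t until the sufficient-decrease condition holds, then step."""
    g = grad(x)
    t = 1.0
    while f(x - t * g) > f(x) - alpha * t * np.dot(g, g):
        t = beta * t
    return x - t * g

# Example on f(x) = (10 x1^2 + x2^2)/2 from the previous slides
f = lambda x: (10 * x[0]**2 + x[1]**2) / 2
grad = lambda x: np.array([10 * x[0], x[1]])
x = np.array([10.0, 10.0])
for _ in range(13):
    x = backtracking_step(f, grad, x)
print(f(x))  # close to the optimal value 0
```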

Interpretation

[Figure: illustration of the backtracking stopping condition, from B & V page 465]

For us, $\Delta x = -\nabla f(x)$.

Backtracking picks up roughly the right step size (13 steps):

[Figure: contour plot with backtracking iterates converging to the optimum]

Here $\beta = 0.8$ (B & V recommend $\beta \in (0.1, 0.8)$).

Exact line search

We could also choose the step to do the best we can along the direction of the negative gradient, called exact line search:

$t = \operatorname{argmin}_{s \ge 0} f(x - s \nabla f(x))$

Usually it is not possible to do this minimization exactly. Approximations to exact line search are often not much more efficient than backtracking, and it's usually not worth it.

Convergence analysis

Assume that $f : \mathbb{R}^n \to \mathbb{R}$ is convex and differentiable, and additionally

$\|\nabla f(x) - \nabla f(y)\|_2 \le L \|x - y\|_2$ for any $x, y$,

i.e., $\nabla f$ is Lipschitz continuous with constant $L > 0$.

Theorem: Gradient descent with fixed step size $t \le 1/L$ satisfies

$f(x^{(k)}) - f(x^\star) \le \frac{\|x^{(0)} - x^\star\|_2^2}{2tk}$

I.e., gradient descent has convergence rate $O(1/k)$; to get $f(x^{(k)}) - f(x^\star) \le \epsilon$, we need $O(1/\epsilon)$ iterations.

Proof

Key steps:
- $\nabla f$ Lipschitz with constant $L$ implies

  $f(y) \le f(x) + \nabla f(x)^T (y - x) + \frac{L}{2} \|y - x\|_2^2$ for all $x, y$

- Plugging in $y = x^+ = x - t \nabla f(x)$,

  $f(x^+) \le f(x) - \left(1 - \frac{Lt}{2}\right) t \|\nabla f(x)\|_2^2$

- Taking $0 < t \le 1/L$, and using convexity of $f$,

  $f(x^+) \le f(x^\star) + \nabla f(x)^T (x - x^\star) - \frac{t}{2} \|\nabla f(x)\|_2^2 = f(x^\star) + \frac{1}{2t} \left( \|x - x^\star\|_2^2 - \|x^+ - x^\star\|_2^2 \right)$

Summing over iterations:

$\sum_{i=1}^k \left( f(x^{(i)}) - f(x^\star) \right) \le \frac{1}{2t} \left( \|x^{(0)} - x^\star\|_2^2 - \|x^{(k)} - x^\star\|_2^2 \right) \le \frac{1}{2t} \|x^{(0)} - x^\star\|_2^2$

Since $f(x^{(k)})$ is nonincreasing,

$f(x^{(k)}) - f(x^\star) \le \frac{1}{k} \sum_{i=1}^k \left( f(x^{(i)}) - f(x^\star) \right) \le \frac{\|x^{(0)} - x^\star\|_2^2}{2tk}$

Convergence analysis for backtracking

Same assumptions: $f : \mathbb{R}^n \to \mathbb{R}$ is convex and differentiable, and $\nabla f$ is Lipschitz continuous with constant $L > 0$. The same rate holds for a step size chosen by backtracking search.

Theorem: Gradient descent with backtracking line search satisfies

$f(x^{(k)}) - f(x^\star) \le \frac{\|x^{(0)} - x^\star\|_2^2}{2 t_{\min} k}$

where $t_{\min} = \min\{1, \beta/L\}$.

If $\beta$ is not too small, then we don't lose much compared to fixed step size ($\beta/L$ vs $1/L$).

Strong convexity

Strong convexity of $f$ means: for some $d > 0$, $\nabla^2 f(x) \succeq dI$ for any $x$. This gives a sharper lower bound than the one from usual convexity:

$f(y) \ge f(x) + \nabla f(x)^T (y - x) + \frac{d}{2} \|y - x\|_2^2$ for all $x, y$

Under the Lipschitz assumption as before, and also strong convexity:

Theorem: Gradient descent with fixed step size $t \le 2/(d + L)$, or with backtracking line search, satisfies

$f(x^{(k)}) - f(x^\star) \le c^k \frac{L}{2} \|x^{(0)} - x^\star\|_2^2$

where $0 < c < 1$.

I.e., the rate with strong convexity is $O(c^k)$, exponentially fast! To get $f(x^{(k)}) - f(x^\star) \le \epsilon$, we need $O(\log(1/\epsilon))$ iterations.

This is called linear convergence, because it looks linear on a semi-log plot:

[Figure: semi-log plot of suboptimality versus iteration, from B & V page 487]

The constant $c$ depends adversely on the condition number $L/d$ (a higher condition number means a slower rate).

A look at the conditions

Lipschitz continuity of $\nabla f$: this means $\nabla^2 f(x) \preceq LI$.
- E.g., consider $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$ (linear regression). Here $\nabla^2 f(\beta) = X^T X$, so $\nabla f$ is Lipschitz with $L = \sigma_{\max}^2(X)$.

Strong convexity of $f$: recall this is $\nabla^2 f(x) \succeq dI$.
- E.g., consider again $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, with $\nabla^2 f(\beta) = X^T X$. Now we need $d = \sigma_{\min}^2(X)$.
- If $X$ is wide, i.e., $X$ is $n \times p$ with $p > n$, then $\sigma_{\min}(X) = 0$, and $f$ can't be strongly convex.
- Even if $\sigma_{\min}(X) > 0$, we can have a very large condition number $L/d = \sigma_{\max}^2(X)/\sigma_{\min}^2(X)$.
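These constants are easy to compute for least squares; a quick numerical check using the singular values of $X$ (the random data here is purely illustrative):

```python
import numpy as np

np.random.seed(0)
n, p = 100, 20
X = np.random.randn(n, p)

# The Hessian of f(beta) = ||y - X beta||_2^2 / 2 is X^T X, whose eigenvalues
# are the squared singular values of X.
sigma = np.linalg.svd(X, compute_uv=False)  # singular values, descending
L, d = sigma[0]**2, sigma[-1]**2
print("L =", L, ", d =", d, ", condition number L/d =", L / d)
```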

A function $f$ having Lipschitz gradient and being strongly convex can be summarized as:

$dI \preceq \nabla^2 f(x) \preceq LI$ for all $x \in \mathbb{R}^n$,

for constants $L > d > 0$. Think of $f$ as being sandwiched between two quadratics.

This may seem like a strong condition to hold globally, over all $x \in \mathbb{R}^n$. But a careful look at the proofs shows we actually only need Lipschitz gradient and/or strong convexity over the sublevel set

$S = \{x : f(x) \le f(x^{(0)})\}$

This is less restrictive.

Practicalities

Stopping rule: stop when $\|\nabla f(x)\|_2$ is small. Recall $\nabla f(x^\star) = 0$. If $f$ is strongly convex with parameter $d$, then

$\|\nabla f(x)\|_2 \le \sqrt{2 d \epsilon} \implies f(x) - f(x^\star) \le \epsilon$

Pros and cons of gradient descent:
- Pro: simple idea, and each iteration is cheap
- Pro: very fast for well-conditioned, strongly convex problems
- Con: often slow, because interesting problems aren't strongly convex or well-conditioned
- Con: can't handle nondifferentiable functions

Forward stagewise regression

Let's stick with $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, the linear regression setting: $X$ is $n \times p$, and its columns $X_1, \ldots, X_p$ are the predictor variables.

Forward stagewise regression: start with $\beta^{(0)} = 0$, and repeat:
- Find the variable $i$ such that $|X_i^T r|$ is largest, where $r = y - X\beta^{(k-1)}$ (largest absolute correlation with the residual)
- Update $\beta_i^{(k)} = \beta_i^{(k-1)} + \gamma \cdot \operatorname{sign}(X_i^T r)$

Here $\gamma > 0$ is small and fixed, called the learning rate. This looks kind of like gradient descent.
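A minimal sketch of this procedure; the data and the number of iterations are illustrative:

```python
import numpy as np

def forward_stagewise(X, y, gamma=0.01, n_iter=2000):
    """Repeatedly nudge the coefficient most correlated with the residual."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ beta                  # current residual
        c = X.T @ r                       # correlations with the residual
        i = np.argmax(np.abs(c))          # most correlated variable
        beta[i] += gamma * np.sign(c[i])  # small fixed-size update
    return beta

# Illustrative data with a sparse true coefficient vector
np.random.seed(0)
X = np.random.randn(50, 10)
y = X[:, 0] - 2 * X[:, 3] + 0.1 * np.random.randn(50)
print(forward_stagewise(X, y).round(2))
```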

Steepest descent

A close cousin to gradient descent; we just change the choice of norm. Let $p, q$ be complementary (dual): $1/p + 1/q = 1$.

Steepest descent updates are $x^+ = x + t \Delta x$, where

$\Delta x = \|\nabla f(x)\|_q \cdot u, \quad u = \operatorname{argmin}_{\|v\|_p \le 1} \nabla f(x)^T v$

- If $p = 2$, then $\Delta x = -\nabla f(x)$: gradient descent
- If $p = 1$, then $\Delta x = -\frac{\partial f(x)}{\partial x_i} e_i$, where

  $\left| \frac{\partial f(x)}{\partial x_i} \right| = \max_{j = 1, \ldots, n} \left| \frac{\partial f(x)}{\partial x_j} \right| = \|\nabla f(x)\|_\infty$

Normalized steepest descent just takes $\Delta x = u$ (unit $p$-norm).

Equivalence

Normalized steepest descent with respect to the $\ell_1$ norm: the updates are

$x_i^+ = x_i - t \cdot \operatorname{sign}\!\left( \frac{\partial f(x)}{\partial x_i} \right)$

where $i$ is the largest component of $\nabla f(x)$ in absolute value.

Compare forward stagewise: the updates are

$\beta_i^+ = \beta_i + \gamma \cdot \operatorname{sign}(X_i^T r), \quad r = y - X\beta$

Recall here $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, so $\nabla f(\beta) = -X^T (y - X\beta)$ and $\partial f(\beta)/\partial \beta_i = -X_i^T (y - X\beta)$.

Forward stagewise regression is exactly normalized steepest descent under the $\ell_1$ norm (with fixed step size $t = \gamma$).

Early stopping and sparse approximation

If we run forward stagewise to completion, we will minimize the least squares criterion $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, i.e., we will get a least squares solution.

What happens if we stop early?
- May seem strange from an optimization perspective (we would be "under-optimizing")...
- Interesting from a statistical perspective, because stopping early gives us a sparse approximation to the least squares solution

Compare to a well-known sparse regression estimator, the lasso:

$\min_{\beta \in \mathbb{R}^p} \frac{1}{2} \|y - X\beta\|_2^2$ subject to $\|\beta\|_1 \le s$

How do lasso solutions and forward stagewise estimates compare?

[Figure: left side is the lasso solution path $\hat{\beta}(s)$ over the bound $s$; right side is the forward stagewise estimate over iterations $k$. From ESL page 609]

For some problems, they are exactly the same (as $\gamma \to 0$).

Gradient boosting

Given observations $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$ and predictor measurements $x_i \in \mathbb{R}^p$, $i = 1, \ldots, n$.

We want to construct a flexible (nonlinear) model for the outcome based on the predictors. Weighted sum of trees:

$\hat{y}_i = \sum_{j=1}^m \beta_j \cdot T_j(x_i), \quad i = 1, \ldots, n$

Each tree $T_j$ inputs the predictor measurements $x_i$ and outputs a prediction. The trees are typically grown pretty short...

Pick a loss function $L$ that reflects the setting; e.g., for continuous $y$, we could take $L(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2$.

We want to solve

$\min_{\beta \in \mathbb{R}^M} \sum_{i=1}^n L\!\left( y_i, \sum_{j=1}^M \beta_j \cdot T_j(x_i) \right)$

where $j$ indexes all trees of a fixed size (e.g., depth 5), so $M$ is huge. The space is simply too big to optimize over directly.

Gradient boosting: basically a version of gradient descent that is forced to work with trees. First think of the minimization as $\min_{\hat{y}} f(\hat{y})$, a function of the predictions $\hat{y}$.

Start with an initial model, e.g., fit a single tree $\hat{y}^{(0)} = T_0$. Repeat:
- Evaluate the gradient $g$ at the latest predictions $\hat{y}^{(k-1)}$:

  $g_i = \left[ \frac{\partial L(y_i, \hat{y}_i)}{\partial \hat{y}_i} \right]_{\hat{y}_i = \hat{y}_i^{(k-1)}}, \quad i = 1, \ldots, n$

- Find a tree $T_k$ that is close to $-g$, i.e., $T_k$ solves

  $\min_{\text{trees } T} \sum_{i=1}^n (-g_i - T(x_i))^2$

  (not hard to approximately solve for a single tree)
- Update the predictions: $\hat{y}^{(k)} = \hat{y}^{(k-1)} + \alpha_k \cdot T_k$

Note: the predictions are weighted sums of trees, as desired.
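A sketch of this loop for the squared loss $L(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2/2$, where $-g$ is just the residual $y - \hat{y}$. It fits each tree with scikit-learn's DecisionTreeRegressor and, for brevity, initializes with the mean rather than a first tree; the depth, step size, and data are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_trees=100, alpha=0.1, depth=3):
    """Gradient boosting: each tree is fit to -g = y - yhat (squared loss)."""
    yhat = np.full(len(y), y.mean())  # simple initial model
    trees = []
    for _ in range(n_trees):
        neg_grad = y - yhat           # -g_i for L = (y_i - yhat_i)^2 / 2
        tree = DecisionTreeRegressor(max_depth=depth).fit(X, neg_grad)
        yhat = yhat + alpha * tree.predict(X)  # yhat^(k) = yhat^(k-1) + alpha * T_k
        trees.append(tree)
    return trees, yhat

np.random.seed(0)
X = np.random.randn(200, 5)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(200)
_, yhat = gradient_boost(X, y)
print(np.mean((y - yhat) ** 2))  # training error shrinks as trees are added
```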

Can we do better?

Recall the $O(1/k)$ rate for gradient descent over the problem class of convex, differentiable functions with Lipschitz continuous gradients.

First-order method: an iterative method that updates $x^{(k)}$ in

$x^{(0)} + \operatorname{span}\{\nabla f(x^{(0)}), \nabla f(x^{(1)}), \ldots, \nabla f(x^{(k-1)})\}$

Theorem (Nesterov): For any $k \le (n-1)/2$ and any starting point $x^{(0)}$, there is a function $f$ in the problem class such that any first-order method satisfies

$f(x^{(k)}) - f(x^\star) \ge \frac{3 L \|x^{(0)} - x^\star\|_2^2}{32 (k+1)^2}$

Can we achieve a rate $O(1/k^2)$? Answer: yes, and more!

References

- S. Boyd and L. Vandenberghe (2004), Convex Optimization, Chapter 9
- T. Hastie, R. Tibshirani and J. Friedman (2009), The Elements of Statistical Learning, Chapters 10 and 16
- Y. Nesterov (2004), Introductory Lectures on Convex Optimization: A Basic Course, Chapter 2
- L. Vandenberghe, Lecture Notes for EE 236C, UCLA, Spring 2011-2012