Gradient Descent. Ryan Tibshirani, Convex Optimization 10-725/36-725


Last time: canonical convex programs

- Linear program (LP): takes the form
$$\min_x \; c^T x \quad \text{subject to} \quad Gx \leq h, \; Ax = b$$
- Quadratic program (QP): like an LP, but with a quadratic criterion
- Semidefinite program (SDP): like an LP, but with matrices
- Conic program: the most general form of all

Gradient descent

Consider unconstrained, smooth convex optimization
$$\min_x \; f(x)$$
i.e., $f$ is convex and differentiable with $\mathrm{dom}(f) = \mathbb{R}^n$. Denote the optimal criterion value by $f^\star = \min_x f(x)$, and a solution by $x^\star$.

Gradient descent: choose an initial $x^{(0)} \in \mathbb{R}^n$, repeat:
$$x^{(k)} = x^{(k-1)} - t_k \cdot \nabla f(x^{(k-1)}), \quad k = 1, 2, 3, \ldots$$
Stop at some point.
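A minimal Python sketch of this iteration, with a gradient-norm stopping rule (discussed later under "Practicalities"); `grad_f`, `x0`, and the fixed step size `t` are supplied by the user, and all names are illustrative rather than from the slides:

```python
import numpy as np

def gradient_descent(grad_f, x0, t, max_iter=1000, tol=1e-8):
    """Fixed-step gradient descent: x^(k) = x^(k-1) - t * grad f(x^(k-1))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= tol:   # grad f(x*) = 0 at a solution
            break
        x = x - t * g
    return x
```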


Gradient descent interpretation

At each iteration, consider the expansion
$$f(y) \approx f(x) + \nabla f(x)^T (y - x) + \frac{1}{2t} \|y - x\|_2^2$$
Quadratic approximation, replacing the usual Hessian $\nabla^2 f(x)$ by $\frac{1}{t} I$:
- $f(x) + \nabla f(x)^T (y - x)$ is the linear approximation to $f$
- $\frac{1}{2t} \|y - x\|_2^2$ is the proximity term to $x$, with weight $1/(2t)$

Choose the next point $y = x^+$ to minimize the quadratic approximation:
$$x^+ = x - t \nabla f(x)$$
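Filling in the one step left implicit: setting the gradient (in $y$) of the quadratic approximation to zero gives
$$\nabla f(x) + \frac{1}{t}(y - x) = 0 \quad \Longleftrightarrow \quad y = x - t\,\nabla f(x),$$
which is exactly the gradient descent update.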

$$x^+ = \operatorname{argmin}_y \; f(x) + \nabla f(x)^T (y - x) + \frac{1}{2t} \|y - x\|_2^2$$

[Figure: $f$ and its quadratic approximation; the blue point is $x$, the red point is $x^+$]

Outline

Today:
- How to choose step sizes
- Convergence analysis
- Forward stagewise regression
- Gradient boosting

Fixed step size

Simply take $t_k = t$ for all $k = 1, 2, 3, \ldots$; this can diverge if $t$ is too big. Consider $f(x) = (10x_1^2 + x_2^2)/2$, gradient descent after 8 steps:

[Figure: contour plot of $f$, with $*$ marking the minimum; the iterates diverge]

It can be slow if $t$ is too small. Same example, gradient descent after 100 steps:

[Figure: contour plot of $f$, with $*$ marking the minimum; the iterates make slow progress]

Same example, gradient descent after 40 appropriately sized steps:

[Figure: contour plot of $f$, with $*$ marking the minimum; the iterates converge]

Clearly there's a tradeoff; the convergence analysis later will give us a better idea.
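A small Python experiment in the spirit of these three pictures, on the same $f(x) = (10x_1^2 + x_2^2)/2$ (so $\nabla f(x) = (10x_1, x_2)$ and $L = 10$); the step sizes and starting point are illustrative guesses, not the ones behind the slides' figures:

```python
import numpy as np

def grad(x):
    # gradient of f(x) = (10*x1^2 + x2^2)/2
    return np.array([10.0 * x[0], x[1]])

for t, label in [(0.25, "too big: diverges"),
                 (0.01, "too small: slow"),
                 (0.09, "roughly right")]:
    x = np.array([10.0, 10.0])
    for _ in range(100):
        x = x - t * grad(x)
    print(f"t = {t}  ({label}):  ||x - x*|| = {np.linalg.norm(x):.2e}")
```

Since fixed-step convergence here requires $t < 2/L = 0.2$, the step $t = 0.25$ blows up along $x_1$, while $t = 0.01$ barely shrinks $x_2$.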

Backtracking line search

One way to adaptively choose the step size is to use backtracking line search:
- First fix parameters $0 < \beta < 1$ and $0 < \alpha \leq 1/2$
- At each iteration, start with $t = 1$, and while
$$f(x - t \nabla f(x)) > f(x) - \alpha t \|\nabla f(x)\|_2^2$$
shrink $t = \beta t$. Else perform the gradient descent update $x^+ = x - t \nabla f(x)$

Simple, and tends to work well in practice (further simplification: just take $\alpha = 1/2$).
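A sketch of one backtracking gradient step in Python, directly following the condition above; `f`, `grad_f`, and the default parameter values are illustrative:

```python
import numpy as np

def backtracking_step(f, grad_f, x, alpha=0.5, beta=0.8):
    """One gradient descent step with backtracking line search."""
    g = grad_f(x)
    t = 1.0
    # shrink t until the sufficient decrease condition holds
    while f(x - t * g) > f(x) - alpha * t * np.dot(g, g):
        t = beta * t
    return x - t * g
```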

Backtracking interpretation

[Figure: $f(x + t\Delta x)$ plotted as a function of $t$, against the lines $f(x) + t\,\nabla f(x)^T \Delta x$ and $f(x) + \alpha t\,\nabla f(x)^T \Delta x$; backtracking stops once $t \leq t_0$, where the curve crosses the upper line]

For us, $\Delta x = -\nabla f(x)$.

Backtracking picks up roughly the right step size (12 outer steps, 40 steps total):

[Figure: contour plot of the same example, with $*$ marking the minimum; the iterates converge]

Here $\alpha = \beta = 0.5$.

Exact line search

We could also choose the step to do the best we can along the direction of the negative gradient, called exact line search:
$$t = \operatorname{argmin}_{s \geq 0} \; f(x - s \nabla f(x))$$
Usually it is not possible to do this minimization exactly. Approximations to exact line search are often not much more efficient than backtracking, and it's usually not worth it.
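For completeness, the one-dimensional minimization can be approximated numerically, e.g. with `scipy.optimize.minimize_scalar`; this is a sketch of the idea (the bound `s_max` is an arbitrary assumption), not a recommendation over backtracking:

```python
from scipy.optimize import minimize_scalar

def exact_line_search_step(f, grad_f, x, s_max=10.0):
    """Approximate t = argmin_{s >= 0} f(x - s * grad f(x)), then step."""
    g = grad_f(x)
    res = minimize_scalar(lambda s: f(x - s * g),
                          bounds=(0.0, s_max), method="bounded")
    return x - res.x * g
```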

Convergence analysis

Assume that $f$ is convex and differentiable, with $\mathrm{dom}(f) = \mathbb{R}^n$, and additionally
$$\|\nabla f(x) - \nabla f(y)\|_2 \leq L \|x - y\|_2 \quad \text{for any } x, y$$
i.e., $\nabla f$ is Lipschitz continuous with constant $L > 0$.

Theorem: Gradient descent with fixed step size $t \leq 1/L$ satisfies
$$f(x^{(k)}) - f^\star \leq \frac{\|x^{(0)} - x^\star\|_2^2}{2tk}$$

We say gradient descent has convergence rate $O(1/k)$; i.e., to get $f(x^{(k)}) - f^\star \leq \epsilon$, we need $O(1/\epsilon)$ iterations.

Proof

Key steps:
- $\nabla f$ Lipschitz with constant $L$ implies
$$f(y) \leq f(x) + \nabla f(x)^T (y - x) + \frac{L}{2} \|y - x\|_2^2 \quad \text{all } x, y$$
- Plugging in $y = x^+ = x - t \nabla f(x)$,
$$f(x^+) \leq f(x) - \Big(1 - \frac{Lt}{2}\Big) t \|\nabla f(x)\|_2^2$$
- Taking $0 < t \leq 1/L$, and using convexity of $f$,
$$f(x^+) \leq f^\star + \nabla f(x)^T (x - x^\star) - \frac{t}{2} \|\nabla f(x)\|_2^2 = f^\star + \frac{1}{2t} \big( \|x - x^\star\|_2^2 - \|x^+ - x^\star\|_2^2 \big)$$

- Summing over iterations:
$$\sum_{i=1}^k \big( f(x^{(i)}) - f^\star \big) \leq \frac{1}{2t} \big( \|x^{(0)} - x^\star\|_2^2 - \|x^{(k)} - x^\star\|_2^2 \big) \leq \frac{1}{2t} \|x^{(0)} - x^\star\|_2^2$$
- Since $f(x^{(k)})$ is nonincreasing,
$$f(x^{(k)}) - f^\star \leq \frac{1}{k} \sum_{i=1}^k \big( f(x^{(i)}) - f^\star \big) \leq \frac{\|x^{(0)} - x^\star\|_2^2}{2tk}$$
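A quick numerical sanity check of the bound just proved, on the earlier quadratic example, where $L = 10$ and $f^\star = 0$, so $t = 1/L = 0.1$ is admissible (the starting point is made up):

```python
import numpy as np

t, x0 = 0.1, np.array([10.0, 10.0])   # t = 1/L for f(x) = (10*x1^2 + x2^2)/2
x = x0.copy()
for k in range(1, 101):
    x = x - t * np.array([10.0 * x[0], x[1]])
    f_gap = (10 * x[0]**2 + x[1]**2) / 2        # f(x^(k)) - f*
    bound = x0 @ x0 / (2 * t * k)               # ||x^(0) - x*||^2 / (2tk)
    assert f_gap <= bound                       # the O(1/k) guarantee
print("bound holds for k = 1, ..., 100")
```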

Convergence analysis for backtracking

Same assumptions: $f$ is convex and differentiable, $\mathrm{dom}(f) = \mathbb{R}^n$, and $\nabla f$ is Lipschitz continuous with constant $L > 0$. We get the same rate for a step size chosen by backtracking search.

Theorem: Gradient descent with backtracking line search satisfies
$$f(x^{(k)}) - f^\star \leq \frac{\|x^{(0)} - x^\star\|_2^2}{2 t_{\min} k}$$
where $t_{\min} = \min\{1, \beta/L\}$.

If $\beta$ is not too small, then we don't lose much compared to the fixed step size ($\beta/L$ vs $1/L$).

Convergence analysis under strong convexity

Reminder: strong convexity of $f$ means $f(x) - \frac{m}{2} \|x\|_2^2$ is convex for some $m > 0$. If $f$ is twice differentiable, this implies $\nabla^2 f(x) \succeq mI$ for any $x$.

This gives a sharper lower bound than that from usual convexity:
$$f(y) \geq f(x) + \nabla f(x)^T (y - x) + \frac{m}{2} \|y - x\|_2^2 \quad \text{all } x, y$$

Under the Lipschitz assumption as before, and also strong convexity:

Theorem: Gradient descent with fixed step size $t \leq 2/(m + L)$, or with backtracking line search, satisfies
$$f(x^{(k)}) - f^\star \leq c^k \frac{L}{2} \|x^{(0)} - x^\star\|_2^2$$
where $0 < c < 1$.
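The slide leaves $c$ unspecified; in the standard analysis (e.g., Nesterov's lectures, referenced below), the fixed step $t = 2/(m+L)$ contracts distances to $x^\star$ by a factor $(\kappa - 1)/(\kappa + 1)$ per iteration, where $\kappa = L/m$, so one admissible choice is
$$c = \left( \frac{\kappa - 1}{\kappa + 1} \right)^2,$$
which makes explicit how the condition number enters the rate discussed on the next slide.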

I.e., the rate under strong convexity is $O(c^k)$, exponentially fast! I.e., to get $f(x^{(k)}) - f^\star \leq \epsilon$, we need $O(\log(1/\epsilon))$ iterations.

This is called linear convergence, because it looks linear on a semi-log plot:

[Figure: $f(x^{(k)}) - p^\star$ vs. $k$ on a semi-log scale, for exact and backtracking line search (from B & V, page 487)]

The constant $c$ depends adversely on the condition number $L/m$: a higher condition number means a slower rate.

A look at the conditions

A look at the conditions for a simple problem, $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$:
- Lipschitz continuity of $\nabla f$: this means $\nabla^2 f(x) \preceq LI$. As $\nabla^2 f(\beta) = X^T X$, we have $L = \sigma_{\max}^2(X)$
- Strong convexity of $f$: this means $\nabla^2 f(x) \succeq mI$. As $\nabla^2 f(\beta) = X^T X$, we have $m = \sigma_{\min}^2(X)$
- If $X$ is wide (i.e., $X$ is $n \times p$ with $p > n$), then $\sigma_{\min}(X) = 0$, and $f$ can't be strongly convex
- Even if $\sigma_{\min}(X) > 0$, we can have a very large condition number $L/m = \sigma_{\max}^2(X)/\sigma_{\min}^2(X)$
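These constants are easy to inspect numerically for a given $X$; a NumPy sketch (the matrix here is random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))           # n = 50, p = 10 (tall, so m > 0)

sigma = np.linalg.svd(X, compute_uv=False)  # singular values of X
L, m = sigma.max()**2, sigma.min()**2       # Hessian of f is X^T X
print(f"L = {L:.2f}, m = {m:.2f}, condition number L/m = {L/m:.2f}")
```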

A function $f$ having Lipschitz gradient and being strongly convex satisfies
$$mI \preceq \nabla^2 f(x) \preceq LI \quad \text{for all } x \in \mathbb{R}^n,$$
for constants $L > m > 0$. Think of $f$ as being sandwiched between two quadratics.

This may seem like a strong condition to hold globally (for all $x \in \mathbb{R}^n$). But a careful look at the proofs shows that we only need Lipschitz gradients/strong convexity over the sublevel set
$$S = \{x : f(x) \leq f(x^{(0)})\}$$
This is less restrictive.

Practicalities

Stopping rule: stop when $\|\nabla f(x)\|_2$ is small
- Recall $\nabla f(x^\star) = 0$ at the solution $x^\star$
- If $f$ is strongly convex with parameter $m$, then
$$\|\nabla f(x)\|_2 \leq \sqrt{2m\epsilon} \;\implies\; f(x) - f^\star \leq \epsilon$$

Pros and cons of gradient descent:
- Pro: simple idea, and each iteration is cheap
- Pro: very fast for well-conditioned, strongly convex problems
- Con: often slow, because interesting problems aren't strongly convex or well-conditioned
- Con: can't handle nondifferentiable functions

Forward stagewise regression

Let's stick with $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, the linear regression setting: $X$ is $n \times p$, and its columns $X_1, \ldots, X_p$ are predictor variables.

Forward stagewise regression: start with $\beta^{(0)} = 0$, repeat:
- Find the variable $i$ such that $|X_i^T r|$ is largest, where $r = y - X\beta^{(k-1)}$ (largest absolute correlation with the residual)
- Update $\beta_i^{(k)} = \beta_i^{(k-1)} + \gamma \cdot \mathrm{sign}(X_i^T r)$

Here $\gamma > 0$ is small and fixed, called the learning rate. This looks kind of like gradient descent.
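A direct Python transcription of this loop; the columns of `X` are assumed standardized beforehand, and all names are illustrative:

```python
import numpy as np

def forward_stagewise(X, y, gamma=0.01, n_steps=1000):
    """Forward stagewise regression with small fixed learning rate gamma."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        r = y - X @ beta                 # residual
        corr = X.T @ r                   # inner products with residual
        i = np.argmax(np.abs(corr))      # most correlated variable
        beta[i] += gamma * np.sign(corr[i])
    return beta
```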

Steepest descent

A close cousin to gradient descent: just change the choice of norm. Let $p, q$ be complementary (dual): $1/p + 1/q = 1$.

Steepest descent updates are $x^+ = x + t \cdot \Delta x$, where
$$\Delta x = \|\nabla f(x)\|_q \cdot u, \quad u = \operatorname{argmin}_{\|v\|_p \leq 1} \nabla f(x)^T v$$
- If $p = 2$, then $\Delta x = -\nabla f(x)$: gradient descent
- If $p = 1$, then $\Delta x = -\partial f(x)/\partial x_i \cdot e_i$, where
$$\left| \frac{\partial f(x)}{\partial x_i} \right| = \max_{j = 1, \ldots, n} \left| \frac{\partial f(x)}{\partial x_j} \right| = \|\nabla f(x)\|_\infty$$

Normalized steepest descent just takes $\Delta x = u$ (unit $q$-norm).

An interesting equivalence

Normalized steepest descent with respect to the $\ell_1$ norm: the updates are
$$x_i^+ = x_i - t \cdot \mathrm{sign}\!\left( \frac{\partial f(x)}{\partial x_i} \right)$$
where $i$ is the largest component of $\nabla f(x)$ in absolute value.

Compare forward stagewise: the updates are
$$\beta_i^+ = \beta_i + \gamma \cdot \mathrm{sign}(X_i^T r), \quad r = y - X\beta$$
But here $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, so $\nabla f(\beta) = -X^T(y - X\beta)$ and $\partial f(\beta)/\partial \beta_i = -X_i^T(y - X\beta)$.

Hence forward stagewise regression is normalized steepest descent under the $\ell_1$ norm (with fixed step size $t = \gamma$).

Early stopping and sparse approximation

If we run forward stagewise to completion, we will minimize $f(\beta) = \frac{1}{2} \|y - X\beta\|_2^2$, i.e., we will produce a least squares solution. What happens if we stop early?
- May seem strange from an optimization perspective (we are "under-optimizing")...
- Interesting from a statistical perspective, because stopping early gives us a sparse approximation to the least squares solution

A well-known sparse regression estimator, the lasso:
$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_1 \leq s$$
How do lasso solutions and forward stagewise estimates compare?

[Figure: stagewise path and lasso path, coordinate profiles plotted against $\|\beta^{(k)}\|_1$ and $\|\hat\beta(s)\|_1$, respectively]

For some problems (some $y$, $X$), they are exactly the same as the learning rate $\gamma \to 0$!

Gradient boosting

Given observations $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$ and predictor measurements $x_i \in \mathbb{R}^p$, $i = 1, \ldots, n$.

We want to construct a flexible (nonlinear) model for the outcome based on the predictors. Weighted sum of trees:
$$\theta_i = \sum_{j=1}^m \beta_j \cdot T_j(x_i), \quad i = 1, \ldots, n$$
Each tree $T_j$ inputs the predictor measurements $x_i$ and outputs a prediction. The trees are typically grown pretty short...

Pick a loss function $L$ that reflects the setting; e.g., for continuous $y$, we could take $L(y_i, \theta_i) = (y_i - \theta_i)^2$.

We want to solve
$$\min_{\beta \in \mathbb{R}^M} \; \sum_{i=1}^n L\Big( y_i, \sum_{j=1}^M \beta_j \cdot T_j(x_i) \Big)$$
Here $j$ indexes all trees of a fixed size (e.g., depth $= 5$), so $M$ is huge. The space is simply too big to optimize.

Gradient boosting: basically a version of gradient descent that is forced to work with trees. First think of the optimization as $\min_\theta f(\theta)$, over the predicted values $\theta$ (subject to $\theta$ coming from trees).

Start with an initial model, e.g., fit a single tree $\theta^{(0)} = T_0$. Repeat:
- Evaluate the gradient $g$ at the latest prediction $\theta^{(k-1)}$:
$$g_i = \left[ \frac{\partial L(y_i, \theta_i)}{\partial \theta_i} \right]_{\theta_i = \theta_i^{(k-1)}}, \quad i = 1, \ldots, n$$
- Find a tree $T_k$ that is close to $-g$, i.e., $T_k$ solves
$$\min_{\text{trees } T} \; \sum_{i=1}^n (-g_i - T(x_i))^2$$
It is not hard to (approximately) solve this for a single tree
- Update our prediction: $\theta^{(k)} = \theta^{(k-1)} + \alpha_k \cdot T_k$

Note: the predictions are weighted sums of trees, as desired.
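A bare-bones sketch of this loop for squared-error loss (scaled by $1/2$, so that $-g$ is exactly the residual $y - \theta$), using scikit-learn's `DecisionTreeRegressor` as the tree fitter; the fixed step `alpha` and the constant initializer standing in for $T_0$ are simplifying assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, alpha=0.1, depth=3):
    """Gradient boosting for L(y_i, theta_i) = (y_i - theta_i)^2 / 2."""
    theta = np.full(len(y), y.mean())    # initial prediction
    trees = []
    for _ in range(n_rounds):
        neg_grad = y - theta                          # -g for this loss
        tree = DecisionTreeRegressor(max_depth=depth).fit(X, neg_grad)
        theta = theta + alpha * tree.predict(X)       # theta^(k) update
        trees.append(tree)
    return trees, theta
```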

Can we do better?

Recall the $O(1/\epsilon)$ rate for gradient descent over the problem class of convex, differentiable functions with Lipschitz continuous gradients.

First-order method: an iterative method that updates $x^{(k)}$ in
$$x^{(0)} + \mathrm{span}\{\nabla f(x^{(0)}), \nabla f(x^{(1)}), \ldots, \nabla f(x^{(k-1)})\}$$

Theorem (Nesterov): For any $k \leq (n-1)/2$ and any starting point $x^{(0)}$, there is a function $f$ in the problem class such that any first-order method satisfies
$$f(x^{(k)}) - f^\star \geq \frac{3L \|x^{(0)} - x^\star\|_2^2}{32(k+1)^2}$$

Can we attain the rate $O(1/k^2)$, i.e., $O(1/\sqrt{\epsilon})$ iterations? Answer: yes (and more)!

References and further reading

- S. Boyd and L. Vandenberghe (2004), "Convex Optimization", Chapter 9
- T. Hastie, R. Tibshirani and J. Friedman (2009), "The Elements of Statistical Learning", Chapters 10 and 16
- Y. Nesterov (1998), "Introductory Lectures on Convex Optimization: A Basic Course", Chapter 2
- R. J. Tibshirani (2014), "A General Framework for Fast Stagewise Algorithms"
- L. Vandenberghe, Lecture notes for EE 236C, UCLA, Spring 2011-2012