Boosted Lasso and Reverse Boosting

Peng Zhao, University of California, Berkeley, USA
Bin Yu, University of California, Berkeley, USA

Summary. This paper introduces the concept of a backward step, in contrast with forward-fashion algorithms like Boosting and Forward Stagewise Fitting. Like classical elimination methods, this backward step works by shrinking the model complexity of an ensemble learner. Through a step analysis, we show that this additional step is necessary for minimizing the L1 penalized loss (the Lasso loss). We also propose a BLasso algorithm, a combination of backward and forward steps, which is able to produce the complete regularization path for Lasso problems. Moreover, BLasso can be generalized to solve problems with a general convex loss and a general convex penalty.

1. Introduction

Boosting is one of the most successful and practical learning ideas to come recently from the machine learning community. Since its inception in 1990 (Schapire, 1990; Freund, 1995; Freund and Schapire, 1996), it has been tried on an amazing array of data sets. The motivation for boosting was a procedure that combines the outputs of many weak or base learners to produce a powerful ensemble. The improved performance is impressive.

Another successful and vastly popular idea that recently came from the statistics community is the Lasso (Tibshirani, 1996, 1997). The Lasso is a shrinkage method that regularizes fitted models using an L1 penalty. Its popularity can be explained in several ways. Since nonparametric models that fit the training data well often have low bias but large variance, prediction accuracy can sometimes be improved by shrinking a model or making it more sparse. The regularization resulting from the L1 penalty leads to sparse solutions, where there are few basis functions with nonzero weights (among all possible choices). This statement is proved rigorously in recent works of Donoho and co-authors (1994, 2004) in the specialized setting of over-complete representations and large under-determined systems of linear equations. Furthermore, the sparse models induced by the Lasso are more interpretable and are often preferred in areas such as biostatistics and the social sciences.

While it is natural to combine boosting and the Lasso to obtain a regularized boosting procedure, it is also intriguing that boosting, without any additional regularization, has its own resistance to overfitting. For specific cases, e.g. L2Boost (Friedman, 2001), this resistance is understood to some extent (Buhlmann and Yu 2002). However, it was not until Forward Stagewise Fitting was introduced and connected with a boosting procedure taking much more cautious steps (hence also called ε-boosting, cf. Friedman 2001) that a striking similarity between Forward Stagewise Fitting and the Lasso was observed (Hastie et al. 2001, Efron et al. 2003). This link between the Lasso and Forward Stagewise Fitting is formally described in the linear regression case through LARS (Least Angle Regression, Efron et al. 2003). It is also known

that for special cases (e.g. orthogonal design) Forward Stagewise Fitting can approximate the Lasso path arbitrarily closely, but that in general the two differ despite their similarity. Nevertheless, Forward Stagewise Fitting is still used as an approximation to the Lasso across regularization parameters, because it is computationally prohibitive to solve the Lasso for many regularization parameters even in the linear regression case.

Both the similarity and the difference between Forward Stagewise Fitting and the Lasso can be clearly seen from our analysis. Like many researchers in the statistics community, we take a functional steepest descent view of boosting (Breiman 1999, Mason et al. 1999, Friedman et al. 2000, Friedman 2001), but two things distinguish our analysis from previous ones:

(a) We look at the Lasso loss, i.e. the original loss plus the L1 penalty, whereas previous studies consider only the loss function. This directly relates every fitting stage to its impact on the Lasso loss. For instance, it can be seen that, except for special cases like early fitting stages or orthogonal design, a boosting or Forward Stagewise Fitting step can lead to an increase in the Lasso loss, which consequently separates the solutions generated by Boosting and Forward Stagewise Fitting from those generated by the Lasso.

(b) Unlike previous studies, where the steepness of a descending step is judged by the gradient, we judge the steepness by the difference in Lasso loss. One technical reason for this change is that the Lasso penalty is not differentiable everywhere, so the gradient cannot be used. But using the difference is more than a compromise forced by this technical difficulty; it leads to a more transparent view of the minimization of the Lasso loss and later gives rise to our backward step, which is described next.

A critical observation from our analysis is that, despite their differences, Forward Stagewise Fitting and Boosting both work in a forward fashion (hence the name Forward Stagewise Fitting). The model complexity, measured by the L1 norm of the model parameters, may fluctuate slightly but holds a dominant upward trend for both methods. This is because both methods proceed only by minimizing the empirical loss, which gradually builds up the model complexity. Borrowing the idea from the Forward Addition and Backward Elimination dual, we construct a backward step. It uses the same minimization rule as the forward step to define each fitting stage, but adds a rule that forces the model complexity to decrease. This backward step not only serves as the key to understanding the difference between Forward Stagewise Fitting and the Lasso, it is also crucial in dealing with situations where the initial model already over-fits. Such a situation often arises in online settings, where the model fitted from a previous trial needs to be refitted given new data.

As an immediate application of this backward step, we construct an algorithm called Reverse Boosting, which uses purely backward steps, starting from an over-fitted model and reducing its model complexity. Reverse Boosting is conceptually appealing as well as practically effective for shrinking models with high model complexity. Not only does the backward step identify the similarity and the difference between the Lasso and Forward Stagewise Fitting, it also reveals a stagewise procedure by which the Lasso solutions can be approximated for general convex loss functions.
We call this stagewise procedure Boosted Lasso (BLasso) to relate it to both of its ancestors: it uses a stagewise steepest descent procedure similar to Boosting and Forward Stagewise Fitting, and the solutions given by BLasso are close approximations of the Lasso solutions at different regularization parameters. Since BLasso has the same order of computational complexity as Forward Stagewise Fitting but, unlike Forward Stagewise Fitting, can be seen to

approximate the Lasso solutions arbitrarily closely, we think BLasso has a substantial advantage over Forward Stagewise Fitting in approximating Lasso solutions. The fact that BLasso can also easily be generalized to give the regularization path for other penalized loss functions with general convex penalties comes as a pleasant surprise. (Another algorithm that works in a similar fashion was developed independently by Rosset 2004.)

After a brief overview of Boosting and Forward Stagewise Fitting in Section 2.1 and of the Lasso in Section 2.2, Section 3 covers our step analysis, introduces the backward step and the Reverse Boosting algorithm as a backward complement of the forward fitting procedures, and discusses their application as a regularization method. Section 4 proposes BLasso as an algorithm that approximates the regularization path of the Lasso and of other penalized loss functions with convex penalties. In Section 5, we support the theory and algorithms with simulated and real data sets that demonstrate the attractiveness of Boosted Lasso. Finally, Section 6 contains a discussion of the choice of step size and of the application of BLasso in online learning, together with a summary of the paper.

2. Boosting, Forward Stagewise Fitting and the Lasso

The nature of stagewise fitting is responsible to a large extent for boosting's resistance to overfitting. Forward Stagewise Fitting uses more fitting stages by limiting the step size at each stage to a small fixed constant, and produces solutions that are strikingly similar to the Lasso. We first give a brief overview of these two algorithms, followed by an overview of the Lasso.

2.1. Boosting and Forward Stagewise Fitting

Boosting algorithms can be seen as functional gradient descent techniques. The task is to estimate a function F: R^d -> R that minimizes an expected loss

  E[C(Y, F(X))],  C(·,·): R × R -> R^+,   (1)

based on data Z_i = (Y_i, X_i), i = 1, ..., n. The univariate Y can be continuous (regression problem) or discrete (classification problem). The most prominent examples of the loss function C(·,·) include the classification margin, the logit loss and the L2 loss. The family of F(·) being considered is the set of ensembles of base learners

  D = {F : F(x) = Σ_{j=1}^m β_j h_j(x), x ∈ R^d, β_j ∈ R}.   (2)

For example, the learner h_j(·) could be a decision tree of size 5, where each h_j(·) describes a tree with different splits. Letting β = (β_1, ..., β_m)^T, we can reparametrize the problem using

  L(Z; β) := C(Y, F(X)),   (3)

where the specification of F is hidden by L, which simplifies our notation. The parameters β̂ are found by minimizing the empirical loss

  β̂ = arg min_β Σ_{i=1}^n L(Z_i; β).   (4)

Despite the fact that the empirical loss function is often convex in β, this is usually a formidable optimization problem for a moderately rich function family, and we usually settle for approximate, suboptimal solutions found by a progressive procedure:

  (ĵ, ĝ) = arg min_{j,g} Σ_{i=1}^n L(Z_i; β̂^t + g·1_j),   (5)
  β̂^{t+1} = β̂^t + ĝ·1_ĵ.   (6)

It is often useful to further divide (5) into two parts: finding g given j, which is typically trivial; and finding j, which is the difficult part, for which approximate solutions are used. Note also that finding j entails estimating the corresponding g as well. A typical strategy is to apply functional gradient descent. This gradient descent view has been recognized and refined by various authors including Breiman (1999), Mason et al. (1999), Friedman et al. (2000), Friedman (2001) and Buhlmann and Yu (2002). The famous AdaBoost, LogitBoost and L2Boost can all be viewed as implementations of this strategy for different loss functions.

Forward Stagewise Fitting is a similar method for approximating the minimization problem described by (5), with additional regularization. It disregards the step size ĝ in (6) and instead updates β̂^t by a fixed step size ε:

  β̂^{t+1} = β̂^t + ε·sign(ĝ)·1_ĵ.

When Forward Stagewise Fitting was introduced (Hastie et al. 1999, Efron 2001), it was described only for the L2 regression setting. We find it more sensible to also remove the minimization over g in (5):

  (ĵ, ŝ) = arg min_{j, s=±ε} Σ_{i=1}^n L(Z_i; β̂^t + s·1_j),   (7)
  β̂^{t+1} = β̂^t + ŝ·1_ĵ.   (8)

Notice that this is only a change of form; the underlying mechanics of the algorithm remain unchanged in the L2 regression setting, as can be seen later in Section 4.

Initially all coefficients are zero. At each successive step, the coordinate whose fixed-size update best reduces the empirical loss is selected; its coefficient β_ĵ is then incremented or decremented by a small amount, while all other coefficients β_j, j ≠ ĵ, are left unchanged. By taking small, cautious steps, Forward Stagewise Fitting imposes some implicit regularization. After T < ∞ iterations, many of the coefficients will be zero, namely those that have yet to be incremented; the others will tend to have absolute values smaller than their corresponding best fits. This shrinkage and sparsity property is explained by the striking similarity between the solutions given by Forward Stagewise Fitting and by the Lasso, of which we give a brief overview next.
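
To make (7)-(8) concrete, here is a minimal sketch of Forward Stagewise Fitting for the squared-error loss. The function name, the fixed iteration budget and the demo data are our own illustrative choices rather than anything specified in the paper.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=2000):
    """Forward Stagewise Fitting for squared-error loss, as in (7)-(8):
    at every stage try each coordinate j and sign s = +/-eps, and take the
    fixed-size step that most reduces the empirical loss."""
    n, m = X.shape
    beta = np.zeros(m)
    for _ in range(n_steps):              # run for a fixed number of stages
        best = None
        for j in range(m):
            for s in (eps, -eps):
                cand = beta.copy()
                cand[j] += s
                loss = np.sum((y - X @ cand) ** 2)
                if best is None or loss < best[0]:
                    best = (loss, j, s)
        _, j, s = best
        beta[j] += s
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    X = (X - X.mean(0)) / np.linalg.norm(X - X.mean(0), axis=0)   # standardize covariates
    y = 3 * X[:, 0] - 2 * X[:, 2] + 0.5 * rng.standard_normal(100)
    print(forward_stagewise(X, y))
```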

2.2. The Lasso

Suppose that we have data {Z_i = (Y_i, X_i); i = 1, ..., n}, where the X_i are the predictor variables and the Y_i are the responses. As in the usual model fitting set-up, we are given some loss function L(Z; β) that is convex in β given Z; the empirical loss is calculated as Σ_{i=1}^n L(Z_i; β), which is then convex in β as well. Most common loss functions, such as the classification margin, logit loss, L2 loss and Huber loss, satisfy this convexity assumption.

The L1 penalty T(β) is defined as the L1 norm of β = (β_1, ..., β_m)^T,

  T(β) = ||β||_1 = Σ_{j=1}^m |β_j|.   (9)

Letting β̂ = (β̂_1, ..., β̂_m)^T, the Lasso estimate is defined by

  β̂ = arg min_β Σ_{i=1}^n L(Z_i; β)  subject to  T(β) ≤ t.   (10)

Here t ≥ 0 is a tuning parameter. For some choice of regularization parameter λ this is equivalent to minimizing the Lasso loss:

  Γ(β; λ) = Σ_{i=1}^n L(Z_i; β) + λ T(β).   (11)

The parameter λ ≥ 0 controls the amount of regularization applied to the estimates. Setting λ = 0 reduces the Lasso problem to minimizing the unregularized empirical loss; on the other hand, a very large λ will completely shrink β̂ to 0, leading to an empty model. In general, moderate values of λ cause shrinkage of the solutions towards 0, and some coefficients may be exactly equal to 0. This sparsity of Lasso solutions has been studied extensively, e.g. Efron et al. (2002), Donoho (1995) and Donoho et al. (2004).

Computation of the solution of the Lasso problem for a fixed λ has been studied for special cases. Specifically, for least squares regression it is a quadratic programming problem with linear inequality constraints; for the SVM, it can be transformed into a linear programming problem. But to obtain a fitted model that performs well on future data, we need to select an appropriate value of the tuning parameter λ. Efficient algorithms have been proposed for least squares regression (LARS, Efron et al. 2002) and for the SVM (1-norm SVM, Zhu et al. 2003) to give the entire regularization path. How to give the entire regularization path of the Lasso problem for a general convex loss function, however, remained open. Moreover, even for the least squares and SVM cases, the known algorithms deal only with the stationary batch-data case; an adaptive online algorithm is needed to deal with real-time data.

We link the Lasso and Forward Stagewise Fitting through our step analysis of the Lasso loss and the introduction of a backward step in the next section. This additional backward step completes Forward Stagewise Fitting and gives rise to the BLasso algorithm, which is described in Section 4.
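
As a small companion to (11), the following sketch evaluates the Lasso loss for a generic convex loss supplied as a callable; the interface and the squared-error example are our own choices, not notation from the paper.

```python
import numpy as np

def lasso_loss(beta, Z, L, lam):
    """Gamma(beta; lambda) = sum_i L(Z_i; beta) + lambda * ||beta||_1, as in (11)."""
    return sum(L(z, beta) for z in Z) + lam * np.sum(np.abs(beta))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, y = rng.standard_normal((50, 4)), rng.standard_normal(50)
    Z = list(zip(X, y))                               # here Z_i is stored as (X_i, Y_i)
    sq = lambda z, b: (z[1] - z[0] @ b) ** 2          # squared-error loss L(Z_i; beta)
    print(lasso_loss(np.array([0.5, 0.0, -0.2, 0.1]), Z, sq, lam=1.0))
```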

3. Step Analysis and Reverse Boosting

One observation is that both Boosting and Forward Stagewise Fitting use only forward steps: they take only steps that lead to a direct reduction of the empirical loss. Compared with classical model selection methods like Forward Selection and Backward Elimination, or the growing and pruning of a classification tree, a backward counterpart is missing. In this section, we uncover this missing backward step through a step analysis of Forward Stagewise Fitting and the Lasso, and describe a Reverse Boosting algorithm that is formed entirely of backward steps. As will be shown in the next section, the backward step turns out to be the last piece of the puzzle of why boosting-like procedures resemble Lasso solutions.

For a given β and λ > 0, consider the impact of a small ε > 0 change of β_j on the Lasso loss Γ(β; λ). For a step s with |s| = ε,

  Δ_j Γ = (Σ_{i=1}^n L(Z_i; β + s·1_j) − Σ_{i=1}^n L(Z_i; β)) + λ(T(β + s·1_j) − T(β))   (12)
        = Δ_j(Σ_{i=1}^n L(Z_i; β)) + λ Δ_j T(β).   (13)

Since T(β) is simply the L1 norm of β, Δ_j T(β) reduces to a simple form:

  Δ_j T(β) = ||β + s·1_j||_1 − ||β||_1 = |β_j + s| − |β_j| = sign^+(sβ_j)·ε,   (14)

where sign^+ is the usual sign function except that 0 is mapped to 1, i.e. sign^+(x) = 1 if x ≥ 0 and sign^+(x) = −1 if x < 0.

Equation (14) shows that an ε step's impact on the penalty has the same fixed magnitude ε for every j; only the sign of the impact may vary. Suppose that, for a given β, the forward steps for different j have impacts on the penalty of the same sign; then Δ_j T is a constant in (13) for all j, and minimizing the Lasso loss using fixed-size steps is equivalent to minimizing the empirical loss directly. At the early stages of Forward Stagewise Fitting, all forward steps move coefficients away from zero, so the signs of their impacts on the penalty are all positive. As the algorithm proceeds into later stages, some of the signs may turn negative, and minimizing the empirical loss is no longer equivalent to minimizing the Lasso loss. Thus, in the beginning, Forward Stagewise Fitting carries out a steepest descent algorithm that minimizes the Lasso loss and follows the Lasso's regularization path, but in later stages the equivalence is broken and the two part ways.

In fact, except for special cases like orthogonally designed covariates, the signs of the forward steps' impacts on the penalty can change from positive to negative. Such steps reduce the empirical loss and the penalty simultaneously, and should therefore be preferred over other forward steps. Moreover, there can also be occasions where a step goes backward to reduce the penalty at a small sacrifice in empirical loss. In general, to minimize the Lasso loss, one needs to go back and forth, trading off the penalty against the empirical loss depending on the regularization parameter.
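
The sign pattern in (14) is easy to check numerically. The helper below is our own illustration; note that for a step toward zero the identity requires |β_j| ≥ ε, so that the step does not cross zero.

```python
import numpy as np

def penalty_change(beta, j, s):
    """Delta_j T(beta) = ||beta + s*1_j||_1 - ||beta||_1 for a single-coordinate step."""
    step = np.zeros_like(beta)
    step[j] = s
    return np.sum(np.abs(beta + step)) - np.sum(np.abs(beta))

beta, eps = np.array([0.3, -0.2, 0.0]), 0.05
print(penalty_change(beta, 0, +eps))   # +eps: step away from zero, penalty grows
print(penalty_change(beta, 0, -eps))   # -eps: backward step, penalty shrinks by eps
print(penalty_change(beta, 2, +eps))   # +eps: beta_j = 0, sign^+(0) = 1, penalty grows
```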

We call a direction that leads to a reduction of the penalty a backward direction, and define a backward step as follows. For a given β̂, a backward step is a change

  Δβ̂ = s_j·1_j for some j,  subject to β̂_j ≠ 0, sign(s_j) = −sign(β̂_j) and |s_j| = ε.

Making such a step will reduce the penalty by a fixed amount λ·ε, but its impact on the empirical loss may vary; therefore we also want

  ĵ = arg min_j Σ_{i=1}^n L(Z_i; β̂ + s_j·1_j)  subject to  β̂_j ≠ 0 and s_j = −sign(β̂_j)·ε,

i.e. ĵ is picked so that the empirical loss after making the step is as small as possible. While forward steps try to reduce the Lasso loss by minimizing the empirical loss, backward steps try to reduce the Lasso loss by minimizing the Lasso penalty. However, unlike other backward-forward duals, here the backward and forward steps do not necessarily contradict each other: during a fitting process, although it rarely happens, it is possible for a step to reduce both the empirical loss and the Lasso penalty. We do not distinguish such steps, which are both forward and backward, and they create no trouble in our study.

Starting from a fitted model, identified by parameters β̂^0, and using purely backward steps, we obtain the following algorithm.

Reverse Boosting

Step 1 (initialization). Given data Z_i = (Y_i, X_i), i = 1, ..., n, a small step size constant ε > 0 and a starting point β̂^0 = (β̂^0_1, ..., β̂^0_m)^T, set the initial active index set I_A^0 = {j : β̂^0_j ≠ 0}. Set the iteration counter t = 0.

Step 2 (backward stepping). Find which backward step to make:

  ĵ = arg min_{j ∈ I_A^t} Σ_{i=1}^n L(Z_i; β̂^t + s_j·1_j),  where s_j = −sign(β̂^t_j)·ε.

If β̂^t_ĵ is within one step size of zero, set it to 0 and reduce the active index set; otherwise take a normal backward step:

  If |β̂^t_ĵ| ≤ ε:  β̂^{t+1} = β̂^t − β̂^t_ĵ·1_ĵ and I_A^{t+1} = I_A^t \ {ĵ};
  otherwise:  β̂^{t+1} = β̂^t + s_ĵ·1_ĵ.

Step 3 (iteration). Increase t by one and repeat Steps 2 and 3. Stop when I_A^t = ∅.

Compared with Forward Stagewise Fitting, Reverse Boosting is greedy in the opposite way. It always shrinks the model: ||β̂^t||_1 strictly decreases and eventually reaches 0 after roughly ||β̂^0||_1 / ε iterations. It can be used for pruning a fully boosted additive tree or, more generally, for shrinking over-fitted models. However, just as Forward Stagewise Fitting uses only forward steps, Reverse Boosting is equally incomplete since it uses only backward steps. On the other hand, the backward step is a greatly important concept: it provides the freedom of going back instead of having to go forward, so models can be fitted from starting points other than 0. For example, to update a model using new data, or to fit models adaptively for time series, the previously fitted model can be used as the starting point and only very few iterations are needed to fit the new model.
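
A minimal sketch of the Reverse Boosting algorithm above for the squared-error loss, starting from an over-fitted coefficient vector and taking only backward steps; the variable names and the least-squares starting point in the demo are illustrative choices of ours.

```python
import numpy as np

def reverse_boosting(X, y, beta0, eps=0.05):
    """Purely backward stepping: repeatedly shrink the active coordinate whose
    eps-step toward zero leaves the empirical loss smallest, until beta = 0."""
    beta = np.array(beta0, dtype=float)
    path = [beta.copy()]
    active = {j for j in range(len(beta)) if beta[j] != 0}      # I_A^0
    while active:
        best = None
        for j in active:
            cand = beta.copy()
            cand[j] -= np.sign(beta[j]) * eps                   # s_j = -sign(beta_j)*eps
            loss = np.sum((y - X @ cand) ** 2)
            if best is None or loss < best[0]:
                best = (loss, j)
        _, j = best
        if abs(beta[j]) <= eps:          # within one step of zero: zero it, shrink I_A
            beta[j] = 0.0
            active.remove(j)
        else:                            # normal backward step
            beta[j] -= np.sign(beta[j]) * eps
        path.append(beta.copy())
    return path                           # ends at beta = 0

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.standard_normal((80, 5))
    y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.5]) + 0.1 * rng.standard_normal(80)
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)            # an over-fitted start
    print(len(reverse_boosting(X, y, beta_ols)), "iterations to reach 0")
```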

By combining both forward and backward steps, we now introduce the Boosted Lasso algorithm.

4. Steepest Descent on Lasso Loss and Boosted Lasso

Mixing forward and backward steps, and calculating the regularization parameter λ along the way, gives rise to the Boosted Lasso (BLasso) algorithm. As the algorithm proceeds through its iterations, Lasso solutions along the whole regularization path are produced.

Boosted Lasso

Step 1 (initialization). Given data Z_i = (Y_i, X_i), i = 1, ..., n, and a small step size constant ε > 0, take an initial forward step

  (ĵ, ŝ_ĵ) = arg min_{j, s=±ε} Σ_{i=1}^n L(Z_i; s·1_j),  β̂^0 = ŝ_ĵ·1_ĵ.

Then calculate the initial regularization parameter

  λ^0 = (1/ε)·(Σ_{i=1}^n L(Z_i; 0) − Σ_{i=1}^n L(Z_i; β̂^0)).

Set the active index set I_A^0 = {ĵ}. Set t = 0.

Step 2 (backward and forward steps). Find the backward step that leads to the minimal empirical loss:

  ĵ = arg min_{j ∈ I_A^t} Σ_{i=1}^n L(Z_i; β̂^t + s_j·1_j),  where s_j = −sign(β̂^t_j)·ε.

Take the step if it leads to a decrease in the Lasso loss; otherwise force a forward step and relax λ if necessary:

  If Γ(β̂^t + s_ĵ·1_ĵ; λ^t) < Γ(β̂^t; λ^t), then
    β̂^{t+1} = β̂^t + s_ĵ·1_ĵ,  λ^{t+1} = λ^t.
  Otherwise,
    (ĵ, ŝ_ĵ) = arg min_{j, s=±ε} Σ_{i=1}^n L(Z_i; β̂^t + s·1_j),
    β̂^{t+1} = β̂^t + ŝ_ĵ·1_ĵ,
    λ^{t+1} = min[λ^t, (1/ε)·(Σ_{i=1}^n L(Z_i; β̂^t) − Σ_{i=1}^n L(Z_i; β̂^{t+1}))],
    I_A^{t+1} = I_A^t ∪ {ĵ}.

Step 3 (iteration). Increase t by one and repeat Steps 2 and 3. Stop when λ^t ≤ 0.
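
A compact sketch of the algorithm above for squared-error loss with the L1 penalty. The function names, the tolerance, the iteration cap and the brute-force candidate search are our own scaffolding; the intent is only to mirror the backward/forward logic, not to be an efficient implementation.

```python
import numpy as np

def emp_loss(X, y, beta):
    """Empirical squared-error loss sum_i (y_i - x_i beta)^2."""
    return np.sum((y - X @ beta) ** 2)

def blasso_l2_l1(X, y, eps=0.1, tol=1e-12, max_iter=100000):
    n, m = X.shape
    unit = np.eye(m)
    # Step 1: initial forward step and the implied initial lambda.
    _, j0, s0 = min((emp_loss(X, y, s * unit[j]), j, s)
                    for j in range(m) for s in (eps, -eps))
    beta = s0 * unit[j0]
    lam = (emp_loss(X, y, np.zeros(m)) - emp_loss(X, y, beta)) / eps
    active = {j0}
    path = [(lam, beta.copy())]
    for _ in range(max_iter):
        if lam <= tol:
            break
        active = {j for j in active if beta[j] != 0}
        if not active:
            break
        gamma = lambda b: emp_loss(X, y, b) + lam * np.sum(np.abs(b))
        # Backward candidate: the active-set step toward zero with the smallest loss.
        _, jb = min((emp_loss(X, y, beta - np.sign(beta[j]) * eps * unit[j]), j)
                    for j in active)
        back = beta - np.sign(beta[jb]) * eps * unit[jb]
        if gamma(back) < gamma(beta):
            beta = back                          # backward step; lambda unchanged
        else:
            # Forced forward step; relax lambda if the loss improvement demands it.
            _, jf, sf = min((emp_loss(X, y, beta + s * unit[j]), j, s)
                            for j in range(m) for s in (eps, -eps))
            new_beta = beta + sf * unit[jf]
            lam = min(lam, (emp_loss(X, y, beta) - emp_loss(X, y, new_beta)) / eps)
            beta = new_beta
            active.add(jf)
        path.append((lam, beta.copy()))
    return path   # list of (lambda, beta) pairs tracing an approximate Lasso path
```

Each iteration changes a single coordinate by ε, so the returned path advances in the staircase fashion discussed in Section 6, and on standardized data it can be compared against a reference Lasso solver.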

BLasso works by choosing between backward and forward steps according to the Lasso loss, and relaxes the regularization once the minimum of the Lasso loss under the current regularization is reached. As the algorithm proceeds, the solutions from its iterations coincide with the solutions along the complete Lasso regularization path.

From an optimization point of view, for a given λ BLasso solves the Lasso problem as a convex optimization problem by fixed step size coordinate descent using both forward and backward steps. When the minimum of the Lasso loss is reached, a forward step is forced and λ is recalculated. Since the Lasso solutions for two adjacent values of λ are very close, the simple fixed step size coordinate descent takes only one to a few steps to reach the next solution. And since nothing beyond fixed step coordinate descent is used, BLasso can be applied to a wide range of convex loss functions and can be generalized to work with general convex penalties as well. We defer the generalization to the end of this section and first take a more careful look at a specific loss function.

For the most common special case, least squares regression, the forward steps, the backward steps and BLasso all become simpler and more intuitive. To see this, we write out the empirical loss function in its L2 form,

  Σ_{i=1}^n L(Z_i; β) = Σ_{i=1}^n (Y_i − X_i β)^2 = Σ_{i=1}^n (Y_i − Ŷ_i)^2 = Σ_{i=1}^n η_i^2,

where Ŷ = (Ŷ_1, ..., Ŷ_n)^T are the fitted values and η = (η_1, ..., η_n)^T are the residuals. Recall that in a penalized regression set-up X_i = (X_i1, ..., X_im), where every covariate X_j = (X_1j, ..., X_nj)^T is normalized, i.e. ||X_j||^2 = Σ_{i=1}^n X_ij^2 = 1 and Σ_{i=1}^n X_ij = 0. For a given β = (β_1, ..., β_m)^T, the impact of a step s of size |s| = ε along β_j on the empirical loss function can be written as

  Δ(Σ_{i=1}^n L(Z_i; β)) = Σ_{i=1}^n [(Y_i − X_i(β + s·1_j))^2 − (Y_i − X_i β)^2]
                         = Σ_{i=1}^n [(η_i − sX_ij)^2 − η_i^2]
                         = Σ_{i=1}^n (−2sη_i X_ij + s^2 X_ij^2)
                         = −2s(η·X_j) + s^2.

The last line delivers a strong message: in least squares regression, given the step size, the impact on the empirical loss is determined solely by the correlation between the fitted residuals and the covariate. Specifically, it is the negative correlation between the fitted residuals and the covariate scaled by 2s, plus the step size squared. Therefore, steepest descent with a fixed step size on the empirical loss is equivalent to finding the covariate with the maximum absolute correlation with the fitted residuals and then proceeding along that direction. This is in principle the same as Forward Stagewise Fitting. Translating this for the forward step, where originally (ĵ, ŝ_ĵ) = arg min_{j, s=±ε} Σ_{i=1}^n L(Z_i; β + s·1_j), we get

  ĵ = arg max_j |η·X_j|  and  ŝ = sign(η·X_ĵ)·ε,

which coincides exactly with the stagewise procedure described in Efron et al. (2002) and is in general the same principle as L2Boosting, i.e. recursively refitting the regression residuals along the most correlated direction, except for the difference in step size choice (Friedman 2001, Buhlmann and Yu 2002). Also, under this simplification a backward step becomes

  ĵ = arg min_j (−s_j(η·X_j))  subject to β̂_j ≠ 0 and s_j = −sign(β̂_j)·ε.
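
The correlation-based rules above are straightforward to code. The sketch below (helper name ours, standardized covariates assumed) returns the forward candidate and, when the model is nonempty, the backward candidate for the current residuals.

```python
import numpy as np

def l2_step_candidates(X, y, beta, eps):
    """Forward and backward step candidates for squared-error loss, ranked purely
    by correlations between the residuals and the (standardized) covariates."""
    eta = y - X @ beta                 # current residuals
    corr = X.T @ eta                   # eta . X_j for every j
    # Forward step: most correlated covariate, step in the sign of the correlation.
    j_fwd = int(np.argmax(np.abs(corr)))
    s_fwd = np.sign(corr[j_fwd]) * eps
    # Backward step: among nonzero coordinates, the eps-step toward zero with the
    # smallest loss increase, i.e. minimal -s_j * (eta . X_j).
    active = np.flatnonzero(beta)
    j_back, s_back = None, None
    if active.size:
        s_act = -np.sign(beta[active]) * eps
        j_back = int(active[np.argmin(-s_act * corr[active])])
        s_back = -np.sign(beta[j_back]) * eps
    return (j_fwd, s_fwd), (j_back, s_back)
```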

Ultimately, since both forward and backward steps are based solely on the correlations between the fitted residuals and the covariates, in the L2 case BLasso reduces to finding the best forward and backward directions by examining these correlations, and then deciding whether to go forward or backward based on the regularization parameter.

As stated earlier, BLasso not only works for general convex loss functions, it can also be generalized to convex penalties other than the L1 penalty. For the Lasso problem, the BLasso algorithm performs fixed step size coordinate descent to minimize the penalized loss; since the penalty is the special L1 norm and (14) holds, the coordinate descent takes the form of backward and forward steps. For general convex penalties this nice feature is lost, but the algorithm still works. Assume T(β): R^m -> R is a penalty function that is convex in β; we now describe the Generalized Boosted Lasso algorithm.

Generalized Boosted Lasso (BLasso)

Step 1 (initialization). Given data Z_i = (Y_i, X_i), i = 1, ..., n, and a small step size constant ε > 0, take an initial forward step

  (ĵ, ŝ_ĵ) = arg min_{j, s=±ε} Σ_{i=1}^n L(Z_i; s·1_j),  β̂^0 = ŝ_ĵ·1_ĵ.

Then calculate the corresponding regularization parameter

  λ^0 = (Σ_{i=1}^n L(Z_i; 0) − Σ_{i=1}^n L(Z_i; β̂^0)) / (T(β̂^0) − T(0)).

Set t = 0.

Step 2 (steepest descent on the Lasso loss). Find the steepest coordinate descent direction on the Lasso loss:

  (ĵ, ŝ_ĵ) = arg min_{j, s=±ε} Γ(β̂^t + s·1_j; λ^t).

Update β̂ if this reduces the Lasso loss; otherwise force a step that minimizes the empirical loss and recalculate the regularization parameter:

  If Γ(β̂^t + ŝ_ĵ·1_ĵ; λ^t) < Γ(β̂^t; λ^t), then
    β̂^{t+1} = β̂^t + ŝ_ĵ·1_ĵ,  λ^{t+1} = λ^t.
  Otherwise,
    (ĵ, ŝ_ĵ) = arg min_{j, s=±ε} Σ_{i=1}^n L(Z_i; β̂^t + s·1_j),
    β̂^{t+1} = β̂^t + ŝ_ĵ·1_ĵ,
    λ^{t+1} = min[λ^t, (Σ_{i=1}^n L(Z_i; β̂^t) − Σ_{i=1}^n L(Z_i; β̂^{t+1})) / (T(β̂^{t+1}) − T(β̂^t))].

Step 3 (iteration). Increase t by one and repeat Steps 2 and 3. Stop when λ^t ≤ 0.

In the Generalized Boosted Lasso algorithm, explicit forward and backward steps are no longer seen. However, the mechanics remain the same: minimize the penalized loss function for each λ, and relax the regularization by reducing λ when the minimum is reached.

Another algorithm of a similar fashion was developed independently by Rosset (2004). There, an incremental quadratic algorithm is used to minimize the penalized loss for each λ; then λ is relaxed by a fixed amount each time the minimum is reached. That algorithm involves calculation of the Hessian matrix and an additional step size parameter for relaxing λ. In comparison, BLasso uses much simpler and computationally less intensive operations, and λ is calculated automatically during the process. Also, for situations like boosting trees, where the number of base learners is huge and at each step the minimization of the empirical loss is carried out through a clever approximation, BLasso adapts naturally in the same way as Forward Stagewise Fitting.

5. Experiments

Three different experiments are carried out to illustrate BLasso with both simulated and real datasets. We first run BLasso on a diabetes dataset (cf. Efron et al. 2002) under the classical Lasso setting, i.e. L2 regression with an L1 penalty; BLasso completely replicates the entire regularization path of the Lasso. Then we run BLasso on the same dataset but under an unconventional setting, L1 regression with an L2 penalty, to show that BLasso works for a general problem with a convex loss function and a convex penalty. Finally, switching from regression to classification, we use simulated data to illustrate BLasso solving a regularized classification problem under the 1-norm SVM setting.

5.1. L2 Regression with L1 Penalty (Classical Lasso)

The dataset used in this and the following experiment is from a diabetes study in which diabetes patients were measured on 10 baseline variables. A prediction model was desired for the response variable. The ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements, were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline. The statisticians were asked to construct a model that predicted the response Y from the covariates X_1, X_2, ..., X_10. Two hopes were evident: that the model would produce accurate baseline predictions of the response for future patients, and that the form of the model would suggest which covariates are important factors in disease progression.

The classical Lasso, L2 regression with an L1 penalty, is used for this purpose. Let X_1, X_2, ..., X_m be n-vectors representing the covariates and Y the vector of responses for the n cases, with m = 10 and n = 442 in this study. Location and scale transformations are applied so that all covariates are standardized to have mean 0 and unit length, and the response has mean zero.
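
A sketch of the generalized loop with the empirical loss and the penalty passed in as callables. The names, the iteration cap and the positivity guard on the penalty increment are our own additions around the steps above.

```python
import numpy as np

def generalized_blasso(L, T, m, eps=0.1, tol=1e-12, max_iter=100000):
    """L(beta) is the empirical loss, T(beta) the convex penalty; both are callables."""
    unit = np.eye(m)
    # Step 1: initial forward step on the empirical loss, then the implied lambda.
    _, j0, s0 = min((L(s * unit[j]), j, s) for j in range(m) for s in (eps, -eps))
    beta = s0 * unit[j0]
    lam = (L(np.zeros(m)) - L(beta)) / (T(beta) - T(np.zeros(m)))
    path = [(lam, beta.copy())]
    for _ in range(max_iter):
        if lam <= tol:
            break
        gamma = lambda b: L(b) + lam * T(b)
        # Step 2: steepest fixed-size coordinate descent direction on the Lasso loss.
        g_new, j, s = min((gamma(beta + s * unit[j]), j, s)
                          for j in range(m) for s in (eps, -eps))
        if g_new < gamma(beta):
            beta = beta + s * unit[j]            # descend at the current lambda
        else:
            # Otherwise take the step minimizing the empirical loss and relax lambda.
            _, j, s = min((L(beta + s * unit[j]), j, s)
                          for j in range(m) for s in (eps, -eps))
            new_beta = beta + s * unit[j]
            dT = T(new_beta) - T(beta)
            if dT <= 0:                          # guard against a non-increasing penalty
                break
            lam = min(lam, (L(beta) - L(new_beta)) / dT)
            beta = new_beta
        path.append((lam, beta.copy()))
    return path
```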

The penalized loss function has the form

  Γ(β; λ) = Σ_{i=1}^n (Y_i − X_i β)^2 + λ||β||_1.   (15)

Figure 1. Estimates of regression coefficients β̂_j, j = 1, 2, ..., 10, for the diabetes data. Left panel: Lasso solutions (given by LARS) as a function of t = ||β||_1; the covariates enter the regression equation sequentially as t increases, in the order j = 3, 9, 4, 7, ..., 1. Right panel: BLasso solutions, which can be seen to be identical to the Lasso solutions.

The left and right panels of Figure 1 are virtually identical, which empirically verifies that BLasso replicates the Lasso regularization path.

5.2. L1 Regression with L2 Penalty

To illustrate BLasso in an unconventional setting, we run an L1 regression with an L2 penalty on the same diabetes data as in the previous section. Instead of (15), the penalized loss function now takes the form

  Γ(β; λ) = Σ_{i=1}^n |Y_i − X_i β| + λ||β||_2,  where ||β||_2 = (Σ_{j=1}^m β_j^2)^{1/2}.   (16)
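
For reference, (15) and (16) translate into the following penalized loss functions; the function names and arguments are our own, and either objective could be handed to a Generalized BLasso-style loop since both the loss and the penalty are convex.

```python
import numpy as np

def gamma_l2_l1(beta, X, y, lam):
    """Equation (15): squared-error loss with an L1 penalty."""
    return np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

def gamma_l1_l2(beta, X, y, lam):
    """Equation (16): absolute-error loss with an L2 penalty."""
    return np.sum(np.abs(y - X @ beta)) + lam * np.sqrt(np.sum(beta ** 2))
```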

Figure 2. Estimates of regression coefficients β̂_j, j = 1, 2, ..., 10, for the diabetes data as in (16). Solutions are plotted as functions of t = (Σ_{j=1}^m β_j^2)^{1/2}.

Figure 2 shows the results given by BLasso for solving (16). The difference between Figure 1 and Figure 2 is significant: because an L2 penalty is used in (16), there is no sparsity in the regression coefficients for any λ.

5.3. Classification with 1-norm SVM (Hinge Loss)

In addition to the regression experiments of the previous two sections, we now look at binary classification. We generate 50 training data points in each of two classes. The first class has two standard normal independent inputs X_1 and X_2 and class label Y = 1. The second class also has two standard normal independent inputs, but conditioned on 4.5 ≤ (X_1)^2 + (X_2)^2 ≤ 8, and has class label Y = −1. We wish to find a classification rule from the training data, so that when given a new input we can assign it a class Y from {1, −1}.

To handle this problem, the 1-norm SVM (Zhu et al. 2003) is considered:

  (β̂_0, β̂) = arg min_{β_0, β} Σ_{i=1}^n (1 − Y_i(β_0 + Σ_{j=1}^m β_j h_j(X_i)))_+ + λ||β||_1,   (17)

where the h_j are basis functions and λ is the regularization parameter. The dictionary of basis functions considered here is D = {√2·X_1, √2·X_2, √2·X_1X_2, (X_1)^2, (X_2)^2}. The fitted model is

  f̂(x) = β̂_0 + Σ_{j=1}^m β̂_j h_j(x),

and the classification rule is given by sign(f̂(x)).
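
A sketch of this simulation set-up and of the objective (17); the rejection sampling for the second class and all names are our own choices.

```python
import numpy as np

def make_data(n_per_class=50, seed=0):
    """Two standard normal inputs; class -1 is restricted to the annulus
    4.5 <= X1^2 + X2^2 <= 8 via rejection sampling."""
    rng = np.random.default_rng(seed)
    X_pos = rng.standard_normal((n_per_class, 2))           # class y = +1
    neg = []
    while len(neg) < n_per_class:                            # class y = -1
        x = rng.standard_normal(2)
        if 4.5 <= x[0] ** 2 + x[1] ** 2 <= 8:
            neg.append(x)
    X = np.vstack([X_pos, np.array(neg)])
    y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])
    return X, y

def basis(X):
    """Dictionary D = {sqrt(2) X1, sqrt(2) X2, sqrt(2) X1 X2, X1^2, X2^2}."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.sqrt(2) * x1, np.sqrt(2) * x2,
                            np.sqrt(2) * x1 * x2, x1 ** 2, x2 ** 2])

def svm_objective(params, H, y, lam):
    """Equation (17): hinge loss plus an L1 penalty on beta (beta_0 unpenalized)."""
    beta0, beta = params[0], params[1:]
    margins = 1.0 - y * (beta0 + H @ beta)
    return np.sum(np.maximum(margins, 0.0)) + lam * np.sum(np.abs(beta))

if __name__ == "__main__":
    X, y = make_data()
    H = basis(X)
    print(svm_objective(np.zeros(1 + H.shape[1]), H, y, lam=0.5))  # all-zero model
```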

Figure 3. Estimates of 1-norm SVM coefficients β̂_j, j = 1, 2, ..., 5, for the simulated two-class classification data. Left panel: BLasso solutions as a function of t = Σ_{j=1}^5 |β̂_j|. Right panel: scatter plot of the data points, with labels + for y = 1 and o for y = −1.

The covariates enter the regression equation sequentially as t increases, in the following order: the two quadratic terms first, followed by the interaction term, then the two linear terms. The penalty used here is the L1 norm, the same as in the Lasso; however, the constant term coefficient β_0 is not penalized but still needs to be estimated. Since the penalty function is still convex, BLasso gives the solutions successfully.

6. Discussion and Concluding Remarks

As seen from the experiments, BLasso is effective for solving the Lasso problem and general convex penalized loss minimization problems. One practical issue left undiscussed is the choice of step size. In general, BLasso takes O(1/ε) steps. For simple L2 regression with m covariates, each step uses O(m × n) basic operations; depending on the actual loss function, base learners and minimization tricks used in each step, the actual computational complexity varies. Smaller steps give a smoother and closer approximation to the regularization path, but also incur more computation. However, we observe that the actual coefficient estimates are quite accurate even for relatively large step sizes.

Figure 4. Estimates of the regression coefficient β̂_3 for the diabetes data as in Section 5.1. Solutions are plotted as functions of λ. Dotted line: estimates using step size ε = 0.05. Solid line: estimates using step size ε = 10. Dash-dot line: estimates using step size ε = 50.

As can be seen from Figure 4, for the small step size ε = 0.05 the solution path cannot be distinguished from the exact regularization path. However, even when the step size is as large as ε = 10 or ε = 50 (200 and 1000 times bigger), the solutions are still good approximations.

As mentioned in Section 3, BLasso has only one step size parameter. This parameter controls both how closely BLasso approximates the minimizing coefficients for each λ and how closely two adjacent λ on the regularization path are placed. As can be seen from Figure 4, a smaller step size leads to a closer approximation of the solutions and also to a finer grid for λ. We argue that if λ is sampled on a coarse grid there is no point in wasting computational power on a much more accurate approximation of the coefficients for each λ; instead, the available computational power should be balanced between these two coupled tasks. BLasso's one-parameter set-up automatically balances these two aspects of the approximation, which is graphically expressed by the staircase shape of the solution paths.

Another rich topic that deserves a closer look is applying BLasso in an online setting. Since BLasso has both forward and backward steps, it should be easily transformed into an adaptive online learning algorithm that goes back and forth to track the best regularization parameter and the corresponding model.

In this paper, we introduced a backward step as a necessary complement to forward fitting procedures like Forward Stagewise Fitting for minimizing the Lasso loss. We also proposed the Reverse Boosting and Boosted Lasso algorithms, the latter of which is able to produce the complete regularization path for general convex loss functions with convex penalties. To summarize, we showed that

(a) A backward step is necessary for boosting-like procedures to produce the Lasso regularization path.

(b) The construction of such a backward step involves reducing the Lasso penalty while keeping the increase in empirical loss as small as possible.

(c) Using only backward steps, a Reverse Boosting algorithm can be constructed that is concise, computationally light and similar to the classical backward elimination algorithm.

(d) Combining both forward and backward steps, and trading off the Lasso penalty against the empirical loss, a Boosted Lasso (BLasso) algorithm can be constructed to produce the Lasso solutions for general convex loss functions.

(e) The BLasso algorithm can be generalized to efficiently compute the regularization path for a general convex loss function with a convex penalty.

(f) The results are verified by experiments using both real and simulated data, for both regression and classification problems.

References

[1] Breiman, L. (1998). Arcing Classifiers, Ann. Statist. 26, 801-824.

[2] Breiman, L. (1999). Prediction Games and Arcing Algorithms, Neural Computation 11, 1493-1517.

[3] Buhlmann, P. and Yu, B. (2001). Boosting with the L2 Loss: Regression and Classification, J. Am. Statist. Ass. 98, 324-340.

[4] Donoho, D. and Elad, M. (2004). Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization, Technical report, Statistics Department, Stanford University.

[5] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2002). Least Angle Regression, Ann. Statist. 32 (2004), no. 2, 407-499.

[6] Freund, Y. (1995). Boosting a weak learning algorithm by majority, Information and Computation 121, 256-285.

[7] Freund, Y. and Schapire, R.E. (1996). Experiments with a new boosting algorithm, Machine Learning: Proc. Thirteenth International Conference, pp. 148-156. Morgan Kaufmann, San Francisco.

[8] Friedman, J.H., Hastie, T. and Tibshirani, R. (2000). Additive Logistic Regression: a Statistical View of Boosting, Ann. Statist. 28, 337-407.

[9] Friedman, J.H. (2001). Greedy Function Approximation: a Gradient Boosting Machine, Ann. Statist. 29, 1189-1232.

[10] Hansen, M. and Yu, B. (2001). Model Selection and the Principle of Minimum Description Length, J. Am. Statist. Ass. 96, 746-774.

[11] Hastie, T., Tibshirani, R. and Friedman, J.H. (2001). The Elements of Statistical Learning: Data Mining, Inference and Prediction, Springer-Verlag, New York.

[12] Li, S. and Zhang, Z. (2004). FloatBoost Learning and Statistical Face Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 26, 1112-1123.

[13] Mason, L., Baxter, J., Bartlett, P. and Frean, M. (1999). Functional Gradient Techniques for Combining Hypotheses, in Advances in Large Margin Classifiers. MIT Press.

[14] Rosset, S. (2004). Tracking Curved Regularized Optimization Solution Paths, NIPS 2004, to appear.

[15] Schapire, R.E. (1990). The Strength of Weak Learnability, Machine Learning 5(2), 197-227.

[16] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso, J. R. Statist. Soc. B 58, no. 1, 267-288.

[17] Zhu, J., Rosset, S., Hastie, T. and Tibshirani, R. (2003). 1-norm Support Vector Machines, Advances in Neural Information Processing Systems 16. MIT Press.