Foundation of Intelligent Systems, Part I. Regression 2

Foundation of Intelligent Systems, Part I. Regression 2
mcuturi@i.kyoto-u.ac.jp

Some Words on the Survey
Not enough answers to say anything meaningful! Try again: survey.

Last Week
Regression: highlight a functional relationship between a predicted variable and predictors.

Last Week
Regression: highlight a functional relationship between a predicted variable and predictors;
find a function $f$ such that, for any pair $(x,y)$ that can appear, $f(x) \approx y$.

Last Week
Regression: highlight a functional relationship between a predicted variable and predictors;
to find an accurate function $f$ such that, for any pair $(x,y)$ that can appear, $f(x) \approx y$,
use a data set and the least-squares criterion:
$\min_{f \in \mathcal{F}} \frac{1}{N} \sum_{j=1}^{N} (y_j - f(x_j))^2$

Last Week
Regression: highlight a functional relationship between a predicted variable and predictors.
When regressing a real number vs. a real number:
[Figure: scatter plot of Rent (x 10,000 JPY) vs. Surface for apartments, with linear, quadratic, cubic, 4th-degree and 5th-degree least-squares fits.]
The least-squares criterion $L(b, a_1, \dots, a_p)$ is used to fit lines and polynomials, and results in solving a linear system: $L$ is of 2nd order in $(b, a_1, \dots, a_p)$, so each $\partial L / \partial a_k$ is linear in $(b, a_1, \dots, a_p)$. Setting $\partial L / \partial a_k = 0$ for every coefficient yields $p+1$ linear equations in $p+1$ variables.

Last Week
Regression: highlight a functional relationship between a predicted variable and predictors.
When regressing a real number vs. $d$ real numbers (a vector in $\mathbb{R}^d$), find the best fit $\alpha$ such that $\alpha^T x + \alpha_0 \approx y$. Add a row of 1's to the $d \times N$ data matrix to get the predictor matrix $X$, and let $Y$ be the row of predicted values. The least-squares criterion also applies:
$L(\alpha) = \|Y - \alpha^T X\|^2 = \alpha^T X X^T \alpha - 2 Y X^T \alpha + \|Y\|^2$
$\nabla_\alpha L = 0 \;\Leftrightarrow\; \alpha = (X X^T)^{-1} X Y^T$
This works if $X X^T \in \mathbb{R}^{(d+1) \times (d+1)}$ is invertible.

Last Week
>> (X*X')\(X*Y')
ans =
   0.049332605603095   (x age)
   0.163122792160298   (x surface)
   0.004411580036614   (x distance)
   2.731204399433800   (intercept, i.e. about 27,300 JPY)
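
For reference, here is a minimal Octave/Matlab sketch of how such an output could be produced; the column vectors age, surface, distance and rent are hypothetical names (the lecture's data is not part of this transcription), and X follows the (d+1) x N convention used above.

% Assemble the predictors (with a row of 1's for the intercept) and solve the normal equations.
N = numel(rent);
X = [age'; surface'; distance'; ones(1, N)];   % (d+1) x N predictor matrix
Y = rent';                                     % 1 x N row of predicted values
alpha = (X*X') \ (X*Y');                       % least-squares weights, one per row of X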

Today
A statistical / probabilistic perspective on LS-regression
A few words on polynomials in higher dimensions
A geometric perspective
Variable colinearity and the overfitting problem
Some solutions: advanced regression techniques
  Subset selection
  Ridge regression
  Lasso

A (very) few words on the statistical/probabilistic interpretation of LS

The Statistical Perspective on Regression
Assume that the values of $y$ are stochastically linked to the observations $x$ through
$y - (\alpha^T x + \beta) \sim \mathcal{N}(0, \sigma)$
This difference is a random variable, denoted $\varepsilon$, called the residual.

The Statistical Perspective on Regression
This can be rewritten as
$y = (\alpha^T x + \beta) + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma)$,
i.e. we assume that the difference between $y$ and $(\alpha^T x + \beta)$ behaves like a Gaussian (normally distributed) random variable.
Goal as a statistician: estimate $\alpha$ and $\beta$ given the observations.

Independent and Identically Distributed (i.i.d.) Observations
Statistical hypothesis: assume that the parameters are $\alpha = a$, $\beta = b$.

Independent and Identically Distributed (i.i.d.) Observations
Statistical hypothesis: assume that the parameters are $\alpha = a$, $\beta = b$.
In such a case, what would be the probability of each observation $(x_j, y_j)$?

Independent and Identically Distributed (i.i.d.) Observations
Statistical hypothesis: assuming that the parameters are $\alpha = a$, $\beta = b$, what would be the probability of each observation?
For each couple $(x_j, y_j)$, $j = 1, \dots, N$,
$P(x_j, y_j \mid \alpha = a, \beta = b) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$

Independent and Identically Distributed (i.i.d.) Observations
Statistical hypothesis: assuming that the parameters are $\alpha = a$, $\beta = b$, what would be the probability of each observation?
For each couple $(x_j, y_j)$, $j = 1, \dots, N$,
$P(x_j, y_j \mid \alpha = a, \beta = b) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$
Since each measurement $(x_j, y_j)$ has been independently sampled,
$P\left(\{(x_j, y_j)\}_{j=1,\dots,N} \mid \alpha = a, \beta = b\right) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$

Independent and Identically Distributed (i.i.d.) Observations
Statistical hypothesis: assuming that the parameters are $\alpha = a$, $\beta = b$, what would be the probability of each observation?
For each couple $(x_j, y_j)$, $j = 1, \dots, N$,
$P(x_j, y_j \mid \alpha = a, \beta = b) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$
Since each measurement $(x_j, y_j)$ has been independently sampled,
$P\left(\{(x_j, y_j)\}_{j=1,\dots,N} \mid \alpha = a, \beta = b\right) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$
This quantity is also known as the likelihood of the dataset $\{(x_j, y_j)\}_{j=1,\dots,N}$, seen as a function of $a$ and $b$:
$\mathcal{L}_{\{(x_j, y_j)\}}(a, b) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$

Maximum Likelihood Estimation (MLE) of Parameters
Hence, for $a, b$, the likelihood function on the dataset $\{(x_j, y_j)\}_{j=1,\dots,N}$ is...
$\mathcal{L}(a, b) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$

Maximum Likelihood Estimation (MLE) of Parameters
Hence, for $a, b$, the likelihood function on the dataset $\{(x_j, y_j)\}_{j=1,\dots,N}$ is...
$\mathcal{L}(a, b) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$
Why not use the likelihood to guess $(a, b)$ given the data?

Maximum Likelihood Estimation (MLE) of Parameters
Hence, for $a, b$, the likelihood function on the dataset $\{(x_j, y_j)\}_{j=1,\dots,N}$ is...
$\mathcal{L}(a, b) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$
...the MLE approach selects the values of $(a, b)$ which maximize $\mathcal{L}(a, b)$.

Maximum Likelihood Estimation (MLE) of Parameters
Hence, for $a, b$, the likelihood function on the dataset $\{(x_j, y_j)\}_{j=1,\dots,N}$ is...
$\mathcal{L}(a, b) = \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_j - (a^T x_j + b))^2}{2\sigma^2}\right)$
...the MLE approach selects the values of $(a, b)$ which maximize $\mathcal{L}(a, b)$.
Since $\max_{(a,b)} \mathcal{L}(a,b) \Leftrightarrow \max_{(a,b)} \log \mathcal{L}(a,b)$ and
$\log \mathcal{L}(a,b) = C - \frac{1}{2\sigma^2} \sum_{j=1}^{N} (y_j - (a^T x_j + b))^2,$
we have $\max_{(a,b)} \mathcal{L}(a,b) \Leftrightarrow \min_{(a,b)} \sum_{j=1}^{N} (y_j - (a^T x_j + b))^2$: maximum likelihood under Gaussian noise recovers the least-squares criterion.

Statistical Approach to Linear Regression
Properties of the MLE estimator: convergence of $\hat{\alpha} \to a$? Confidence intervals for coefficients. Test procedures to assess whether the model fits the data.
[Figure: histogram of the residuals (relative frequency vs. residual value).]
Bayesian approaches: instead of looking for one optimal fit $(a, b)$, juggle with a whole density on $(a, b)$ to make decisions, etc.

A few words on polynomials in higher dimensions

A few words on polynomials in higher dimensions
For $d$ variables, that is for points $x \in \mathbb{R}^d$, the space of polynomials in these variables up to degree $p$ is generated by
$\{x^u \mid u \in \mathbb{N}^d,\ u = (u_1, \dots, u_d),\ \sum_{i=1}^{d} u_i \le p\}$
where the monomial $x^u$ is defined as $x_1^{u_1} x_2^{u_2} \cdots x_d^{u_d}$.
Recurrence for the dimension of that space: $\dim_{p+1} = \dim_p + \binom{d+p}{p+1}$.
For $d = 20$ and $p = 5$: $1 + 20 + 210 + 1540 + 8855 + 42504 > 50{,}000$.
The problem with polynomial interpolation in high dimensions is the explosion of the number of relevant variables (one per monomial).
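
As a quick check of this count (not from the lecture), the number of monomials of exact degree $k$ in $d$ variables is $\binom{k+d-1}{k}$, which a few lines of Octave/Matlab can evaluate:

% Count monomials of each degree in d variables, then the total up to degree p.
d = 20; p = 5;
counts = arrayfun(@(k) nchoosek(k + d - 1, k), 0:p);
disp(counts)        % 1  20  210  1540  8855  42504
disp(sum(counts))   % 53130 monomials up to degree 5, i.e. more than 50,000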

Geometric Perspective

Back to Basics
Recall the problem:
$X = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \end{bmatrix} \in \mathbb{R}^{(d+1) \times N}$ and $Y = [\, y_1\ \cdots\ y_N \,] \in \mathbb{R}^N$.
We look for $\alpha$ such that $\alpha^T X \approx Y$.

Back to Basics
If we transpose this expression we get $X^T \alpha \approx Y^T$:
$\begin{bmatrix} 1 & x_{1,1} & \cdots & x_{d,1} \\ 1 & x_{1,2} & \cdots & x_{d,2} \\ \vdots & \vdots & & \vdots \\ 1 & x_{1,N} & \cdots & x_{d,N} \end{bmatrix} \begin{bmatrix} \alpha_0 \\ \vdots \\ \alpha_d \end{bmatrix} \approx \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}$
Abusing notation, write $Y$ for $Y^T$, $X$ for $X^T$, and $X_k$ for the $(k+1)$-th column of $X$; then
$\sum_{k=0}^{d} \alpha_k X_k \approx Y$
Note how $X_k$ collects all the values taken by the $k$-th variable.
Problem: approximate/reconstruct $Y \in \mathbb{R}^N$ using $X_0, X_1, \dots, X_d \in \mathbb{R}^N$.

Linear System
Reconstructing $Y \in \mathbb{R}^N$ using the vectors $X_0, X_1, \dots, X_d \in \mathbb{R}^N$: our ability to approximate $Y$ depends implicitly on the space spanned by $X_0, X_1, \dots, X_d$.
[Sequence of figures: the vectors $Y$, $X_0$, $X_1$, $X_2$ drawn in $\mathbb{R}^N$.]
Consider the observed vector $Y \in \mathbb{R}^N$ of predicted values.
Plot the first regressor $X_0$...
Assume the next regressor $X_1$ is colinear to $X_0$... and so is $X_2$: very little choice to approximate $Y$.
Suppose instead that $X_2$ is not colinear to $X_0$: this opens new ways to reconstruct $Y$.
When $X_0, X_1, X_2$ are linearly independent, $Y$ lies in their span, since (in this 3-dimensional picture) the space they span has dimension 3.

Linear System
Reconstructing $Y \in \mathbb{R}^N$ using the vectors $X_0, X_1, \dots, X_d \in \mathbb{R}^N$: our ability to approximate $Y$ depends implicitly on the space spanned by $X_0, X_1, \dots, X_d$.
The dimension of that space is $\mathrm{Rank}(X)$, the rank of $X$, and $\mathrm{Rank}(X) \le \min(d+1, N)$.

Linear System
Three cases, depending on $\mathrm{Rank}(X)$ and $d, N$:
1. $\mathrm{Rank}(X) < N$: the $d+1$ column vectors do not span $\mathbb{R}^N$; for an arbitrary $Y$, there is no solution to $\alpha^T X = Y$.
2. $\mathrm{Rank}(X) = N$ and $d+1 > N$: too many variables; they span the whole of $\mathbb{R}^N$, and there is an infinite number of solutions to $\alpha^T X = Y$.
3. $\mathrm{Rank}(X) = N$ and $d+1 = N$: # variables = # observations; exact and unique solution $\alpha = X^{-1} Y$, for which $\alpha^T X = Y$.
In most applications $d+1 \ne N$, so we are either in case 1 or case 2.

Case 1: $\mathrm{Rank}(X) < N$
There is no solution to $\alpha^T X = Y$ (equivalently $X\alpha = Y$) in the general case. What about the orthogonal projection of $Y$ on the image of $X$?
[Figure: $Y$, its projection $\hat{Y}$, and the subspace $\mathrm{span}\{X_0, X_1, \dots, X_d\}$.]
Namely, the point $\hat{Y}$ such that
$\hat{Y} = \underset{u \in \mathrm{span}\{X_0, X_1, \dots, X_d\}}{\operatorname{argmin}} \|Y - u\|$

Case 1: $\mathrm{Rank}(X) < N$
Lemma 1. $\{X_0, X_1, \dots, X_d\}$ is a linearly independent family $\Leftrightarrow$ $X^T X$ is invertible.

Case 1: $\mathrm{Rank}(X) < N$
Computing the projection $\hat{\omega}$ of a point $\omega$ on a subspace $V$ is well understood. In particular, if $(X_0, X_1, \dots, X_d)$ is a basis of $\mathrm{span}\{X_0, X_1, \dots, X_d\}$ (that is, $\{X_0, X_1, \dots, X_d\}$ is a linearly independent family), then $X^T X$ is invertible and
$\hat{Y} = X (X^T X)^{-1} X^T Y$
This gives us the vector of weights $\alpha$ we are looking for:
$\hat{Y} = X \underbrace{(X^T X)^{-1} X^T Y}_{\hat{\alpha}} = X \hat{\alpha} \approx Y, \quad \text{or} \quad \hat{\alpha}^T X \approx Y$
What can go wrong?

Case 1: $\mathrm{Rank}(X) < N$
If $X^T X$ is invertible, $\hat{Y} = X (X^T X)^{-1} X^T Y$.
If $X^T X$ is not invertible... we have a problem.
If the condition number of $X^T X$,
$\frac{\lambda_{\max}(X^T X)}{\lambda_{\min}(X^T X)},$
is very large, a small change in $Y$ can cause dramatic changes in $\alpha$; in this case the linear system is said to be badly conditioned... Using the formula $\hat{Y} = X (X^T X)^{-1} X^T Y$ might then return garbage, as can be seen in the following Matlab example.
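
The Matlab example shown in the lecture is not reproduced in this transcription; the following Octave/Matlab sketch, with made-up data, is only an assumption about what such an example could look like. It builds a design matrix $X$ (here $N \times (d+1)$) with two almost colinear columns and shows how a tiny perturbation of $Y$ can move the least-squares weights a lot.

% Two nearly colinear regressors plus an intercept: X'X is badly conditioned.
N = 50;
t = (1:N)';
X = [ones(N,1), t, t + 1e-6*randn(N,1)];     % second and third columns are almost identical
Y = 3 + 0.5*t + randn(N,1);

cond(X'*X)                                   % huge condition number

a1 = (X'*X) \ (X'*Y);                        % normal equations
a2 = (X'*X) \ (X'*(Y + 1e-3*randn(N,1)));    % same, after a tiny perturbation of Y
[a1, a2]                                     % the two weight vectors can differ dramatically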

Case 2: $\mathrm{Rank}(X) = N$ and $d+1 > N$ (high-dimensional, low-sample setting)
This is an ill-posed inverse problem: the set $\{\alpha \in \mathbb{R}^{d+1} \mid X\alpha = Y\}$ is a whole affine subspace, and we need to choose one among many admissible points.
When does this happen? In high-dimensional, low-sample settings (DNA chips, multimedia, etc.).
How to solve this? Use something called regularization.

A practical perspective: Colinearity and Overfitting

A Few High-dimension, Low-sample Settings
DNA chips are very long vectors of measurements, one for each gene.
Task: regress a health-related variable against gene expression levels.
Image: http://bioinfo.cs.technion.ac.il/projects/Kahana-Navon/DNA-chips.htm

A Few High-dimension, Low-sample Settings
Emails represented as bags of words:
email$_j$ = ( ..., please = 2, send = 1, ..., money = 2, ..., assignment = 0, ... ) $\in \mathbb{R}^d$
Task: regress the probability that a message is a junk email against its bag-of-words representation.
Image: http://clg.wlv.ac.uk/resources/junk-emails/index.php

Correlated Variables
Suppose you run a real-estate company. For each apartment you have compiled a few hundred predictor variables, e.g.:
distances to the nearest convenience store, pharmacy, supermarket, parking lot, etc.
distances to all main locations in Kansai
socio-economic variables of the neighborhood
characteristics of the apartment
Some are obviously correlated (correlated = almost colinear), e.g. distance to the Post Office vs. distance to the Post ATM. In that case, we may have some problems (see the Matlab sketch below).
Source: http://realestate.yahoo.co.jp/
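
The Matlab example from the lecture is not part of this transcription; as an assumed, illustrative stand-in, the Octave/Matlab sketch below builds two strongly correlated distance variables (hypothetical names and data) and refits the least-squares weights on random subsamples. The individual weights of the two correlated variables are unstable, even though their sum is not.

% Rent regressed on two almost identical distance variables (made-up data).
N = 100;
dist_post_office = 10*rand(N,1);
dist_post_atm    = dist_post_office + 0.01*randn(N,1);   % nearly the same measurement
rent = 8 - 0.3*dist_post_office + 0.5*randn(N,1);

X = [ones(N,1), dist_post_office, dist_post_atm];
for trial = 1:3
    idx = randperm(N, 80);                                % refit on a random subsample
    a   = (X(idx,:)'*X(idx,:)) \ (X(idx,:)'*rent(idx));
    fprintf('intercept %6.2f   w_office %8.2f   w_atm %8.2f   sum %6.2f\n', ...
            a(1), a(2), a(3), a(2) + a(3));
end
% The two individual weights swing from fit to fit, while their sum stays near -0.3.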

Overfitting
Given $d$ variables (including the constant variable), consider the least-squares criterion
$L_d(\alpha_1, \dots, \alpha_d) = \sum_{j=1}^{N} \left(y_j - \sum_{i=1}^{d} \alpha_i x_{i,j}\right)^2$
Add any variable vector $x_{d+1,j}$, $j = 1, \dots, N$, and define
$L_{d+1}(\alpha_1, \dots, \alpha_d, \alpha_{d+1}) = \sum_{j=1}^{N} \left(y_j - \sum_{i=1}^{d} \alpha_i x_{i,j} - \alpha_{d+1} x_{d+1,j}\right)^2$
THEN
$\min_{\alpha \in \mathbb{R}^{d+1}} L_{d+1}(\alpha) \;\le\; \min_{\alpha \in \mathbb{R}^{d}} L_d(\alpha)$
Why? Because $L_d(\alpha_1, \dots, \alpha_d) = L_{d+1}(\alpha_1, \dots, \alpha_d, 0)$.
The residual sum of squares goes down... but is it relevant to add variables?
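
A quick numerical illustration of this fact (not from the lecture, with made-up data): appending even a pure-noise column to the design matrix never increases the minimal residual sum of squares.

% Adding an irrelevant column can only decrease the residual sum of squares.
N = 60;
X = [ones(N,1), randn(N,2)];            % intercept + 2 informative variables
Y = X*[1; 2; -1] + 0.3*randn(N,1);

rss = @(A) sum((Y - A*((A'*A)\(A'*Y))).^2);   % best RSS achievable with the columns of A

rss_d  = rss(X)                         % RSS with the original variables
rss_d1 = rss([X, randn(N,1)])           % RSS after adding a pure-noise column: never larger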

Occam's razor: a formalization of overfitting
Minimizing least squares (RSS) is not clever enough; we need another idea to avoid overfitting.
Occam's razor (lex parsimoniae, the law of parsimony) is the principle that recommends selecting the hypothesis that makes the fewest assumptions: one should always opt for an explanation in terms of the fewest possible causes, factors, or variables.
Wikipedia: William of Ockham, born 1287, died 1347.

Advanced Regression Techniques

Quick Reminder on Vector Norms
For a vector $a \in \mathbb{R}^d$, the Euclidean norm is the quantity
$\|a\|_2 = \sqrt{\sum_{i=1}^{d} a_i^2}$
More generally, the $q$-norm is, for $q > 0$,
$\|a\|_q = \left(\sum_{i=1}^{d} |a_i|^q\right)^{1/q}$
In particular, for $q = 1$,
$\|a\|_1 = \sum_{i=1}^{d} |a_i|$
In the limits $q \to \infty$ and $q \to 0$,
$\|a\|_\infty = \max_{i=1,\dots,d} |a_i|, \qquad \|a\|_0 = \#\{i \mid a_i \ne 0\}$
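
As a small illustration (not from the slides), these norms are immediate to compute in Octave/Matlab:

% Norms of an example vector.
a = [3; 0; -4; 0; 1];
norm(a, 2)          % Euclidean norm: sqrt(9 + 16 + 1)
norm(a, 1)          % sum of absolute values: 8
norm(a, inf)        % largest absolute value: 4
nnz(a)              % "0-norm": number of nonzero entries, here 3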

Tikhonov Regularization ('43) - Ridge Regression ('62)
Tikhonov's motivation: solve ill-posed inverse problems by regularization.
If $\min_\alpha L(\alpha)$ is achieved at many points... consider instead
$\min_\alpha L(\alpha) + \lambda \|\alpha\|_2^2$
We can show that this leads to selecting
$\hat{\alpha} = (X^T X + \lambda I_{d+1})^{-1} X^T Y$
The condition number has changed to
$\frac{\lambda_{\max}(X^T X) + \lambda}{\lambda_{\min}(X^T X) + \lambda}$
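
A minimal Octave/Matlab sketch of this estimator, under the same conventions as the earlier sketches ($X$ is $N \times (d+1)$ with a column of ones, $Y$ is $N \times 1$; the data and the value of $\lambda$ are made up):

% Ridge regression: least squares with an L2 penalty on the weights.
N = 80; d = 5;
X = [ones(N,1), randn(N,d)];                          % N x (d+1) design matrix
Y = X*[1; randn(d,1)] + 0.5*randn(N,1);

lambda    = 0.1;
alpha_ls  = (X'*X) \ (X'*Y);                          % ordinary least squares
alpha_rdg = (X'*X + lambda*eye(d+1)) \ (X'*Y);        % ridge estimate, shrunk towards 0
[alpha_ls, alpha_rdg]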

Subset Selection: Exhaustive Search
Following Occam's razor, ideally we would like to know, for any value $p$,
$\min_{\alpha,\ \|\alpha\|_0 = p} L(\alpha)$
i.e. select the best vector $\alpha$ which only gives nonzero weights to $p$ variables: find the best combination of $p$ variables.
Practical implementation: for $p \le n$, there are $\binom{n}{p}$ possible combinations of $p$ variables. The brute-force approach generates $\binom{n}{p}$ regression problems and selects the one that achieves the best RSS. This is impossible in practice for moderately large $n$ and $p$: $\binom{30}{5} \approx 150{,}000$.

Subset Selection: Forward Search
Since the exact search is intractable in practice, consider the forward heuristic (a code sketch follows below).
In forward search: define $I_1 = \{0\}$.
Given a set $I_k \subset \{0, \dots, d\}$ of $k$ variables, what is the most informative variable one could add? Compute, for each variable $i$ in $\{0, \dots, d\} \setminus I_k$,
$t_i = \min_{(\alpha_k)_{k \in I_k},\, \alpha} \sum_{j=1}^{N} \left(y_j - \sum_{k \in I_k} \alpha_k x_{k,j} - \alpha\, x_{i,j}\right)^2$
Set $I_{k+1} = I_k \cup \{i^\star\}$ for any $i^\star$ such that $t_{i^\star} = \min_i t_i$.
Set $k = k+1$, and repeat until the desired number of variables is reached.
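
A compact Octave/Matlab sketch of this forward heuristic (illustrative only; $X$ is the $N \times (d+1)$ matrix whose first column is the constant variable, $Y$ the $N \times 1$ response, and p_max the desired number of variables):

% Greedy forward selection of columns of X by residual sum of squares.
rss = @(A) sum((Y - A*((A'*A)\(A'*Y))).^2);     % best RSS using the columns in A

I = 1;                                          % start from the constant column (index 1 here)
p_max = 4;
while numel(I) < p_max
    candidates = setdiff(1:size(X,2), I);
    scores = arrayfun(@(i) rss(X(:, [I, i])), candidates);
    [~, best] = min(scores);
    I = [I, candidates(best)];                  % add the most informative variable
end
I                                               % indices of the selected variables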

Subset Selection: Backward Search
... or the backward heuristic.
In backward search: define $I_d = \{0, 1, \dots, d\}$.
Given a set $I_k \subset \{0, \dots, d\}$ of $k$ variables, what is the least informative variable one could remove? Compute, for each variable $i$ in $I_k$,
$t_i = \min_{(\alpha_k)_{k \in I_k \setminus \{i\}}} \sum_{j=1}^{N} \left(y_j - \sum_{k \in I_k \setminus \{i\}} \alpha_k x_{k,j}\right)^2$
Set $I_{k-1} = I_k \setminus \{i^\star\}$ for any $i^\star$ such that $t_{i^\star} = \min_i t_i$ (removing $i^\star$ increases the RSS the least).
Set $k = k-1$, and repeat until the desired number of variables is reached.

Subset Selection: LASSO
Naive least squares: $\min_\alpha L(\alpha)$
Best fit with $p$ variables (Occam!): $\min_{\alpha,\ \|\alpha\|_0 = p} L(\alpha)$
Tikhonov-regularized least squares: $\min_\alpha L(\alpha) + \lambda \|\alpha\|_2^2$
LASSO (least absolute shrinkage and selection operator): $\min_\alpha L(\alpha) + \lambda \|\alpha\|_1$
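
The slides stop at the definition of the lasso objective. As one standard way to minimize it numerically (not the method discussed in the lecture), here is a short Octave/Matlab sketch of the iterative soft-thresholding algorithm (ISTA), reusing the $N \times (d+1)$ design matrix $X$ and response $Y$ from the previous sketches and an arbitrary $\lambda$:

% LASSO via iterative soft-thresholding (ISTA):
%   minimize ||Y - X*alpha||_2^2 + lambda*||alpha||_1
lambda = 0.5;
Lip    = 2*norm(X'*X);                    % Lipschitz constant of the gradient of the quadratic term
alpha  = zeros(size(X,2), 1);
for t = 1:2000
    g     = 2*X'*(X*alpha - Y);           % gradient of ||Y - X*alpha||^2
    z     = alpha - g/Lip;                % gradient step
    alpha = sign(z) .* max(abs(z) - lambda/Lip, 0);   % soft-thresholding
end
alpha                                     % several entries end up exactly zero

The soft-thresholding step is what drives some coefficients exactly to zero, which is why the $\ell_1$ penalty both shrinks and selects variables, in contrast to the ridge penalty, which only shrinks.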