Spectral Regularization

Spectral Regularization Lorenzo Rosasco 9.520 Class 07 February 27, 2008

About this class. Goal: to discuss how a class of regularization methods, originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. These algorithms are kernel methods that can be easily implemented and have a common derivation, but different computational and theoretical properties.

Plan: From ERM to Tikhonov regularization. Linear ill-posed problems and stability. Spectral regularization and filtering. Examples of algorithms.

Basic Notation. Training set S = {(X_1, Y_1), ..., (X_n, Y_n)}. X is the n by d input matrix. Y = (Y_1, ..., Y_n) is the output vector. k denotes the kernel function, K the n by n kernel matrix with entries K_ij = k(X_i, X_j), and H the RKHS with kernel k. The RLS estimator solves min_{f ∈ H} (1/n) Σ_{i=1}^n (Y_i − f(X_i))² + λ‖f‖²_H.

Representer Theorem. We have seen that RKHS allow us to write the RLS estimator in the form f^λ_S(X) = Σ_{i=1}^n c_i k(X, X_i) with (K + nλI)c = Y, where c = (c_1, ..., c_n).
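
The following is a minimal numpy sketch of this solution; the Gaussian kernel, the function names and the default parameters are illustrative assumptions, not part of the original notes.

    import numpy as np

    def gaussian_kernel(A, B, gamma=1.0):
        # pairwise Gaussian kernel between the rows of A and the rows of B
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)

    def rls_fit(X, Y, lam, gamma=1.0):
        # coefficients of the RLS estimator: solve (K + n*lam*I) c = Y
        n = X.shape[0]
        K = gaussian_kernel(X, X, gamma)
        return np.linalg.solve(K + n * lam * np.eye(n), Y)

    def rls_predict(X_train, c, X_test, gamma=1.0):
        # f(x) = sum_i c_i k(x, X_i)
        return gaussian_kernel(X_test, X_train, gamma) @ c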

Empirical risk minimization. Similarly, we can prove that the solution of empirical risk minimization min_{f ∈ H} (1/n) Σ_{i=1}^n (Y_i − f(X_i))² can be written as f_S(X) = Σ_{i=1}^n c_i k(X, X_i), where the coefficients satisfy Kc = Y.

The Role of Regularization. We observed that adding a penalization term can be interpreted as a way to control smoothness and avoid overfitting: the problem min_{f ∈ H} (1/n) Σ_{i=1}^n (Y_i − f(X_i))² becomes min_{f ∈ H} (1/n) Σ_{i=1}^n (Y_i − f(X_i))² + λ‖f‖²_H.

The Role of Regularization. Now we can observe that adding a penalty also has an effect from a numerical point of view: the linear system Kc = Y becomes (K + nλI)c = Y, and this stabilizes a possibly ill-conditioned matrix inversion problem. This is the point of view of regularization for (ill-posed) inverse problems.

Ill-posed Inverse Problems. Hadamard introduced the definition of ill-posedness. Ill-posed problems are typically inverse problems. Let g ∈ G and f ∈ F, with G, F Hilbert spaces, and let L be a linear, continuous operator; consider the equation g = Lf. The direct problem is to compute g given f; the inverse problem is to compute f given the data g. The inverse problem of finding f is well-posed when the solution exists, is unique and is stable, that is, it depends continuously on the initial data g. Otherwise the problem is ill-posed.

Linear System for ERM. In the finite dimensional case the main problem is numerical stability. For example, in the learning setting the kernel matrix can be decomposed as K = QΣQ^t, with Σ = diag(σ_1, ..., σ_n), σ_1 ≥ σ_2 ≥ ... ≥ σ_n ≥ 0, and q_1, ..., q_n the corresponding eigenvectors. Then c = K⁻¹Y = QΣ⁻¹Q^t Y = Σ_{i=1}^n (1/σ_i) ⟨q_i, Y⟩ q_i. In correspondence of small eigenvalues, small perturbations of the data can cause large changes in the solution. The problem is ill-conditioned.

Regularization as a Filter. For Tikhonov regularization, c = (K + nλI)⁻¹Y = Q(Σ + nλI)⁻¹Q^t Y = Σ_{i=1}^n 1/(σ_i + nλ) ⟨q_i, Y⟩ q_i. Regularization filters out the undesired components: for σ_i ≫ nλ, 1/(σ_i + nλ) ≈ 1/σ_i; for σ_i ≪ nλ, 1/(σ_i + nλ) ≈ 1/(nλ).

Matrix Function. Note that we can look at a scalar function G_λ(σ) as a function on the kernel matrix. Using the eigen-decomposition of K we can define G_λ(K) = Q G_λ(Σ) Q^T, meaning G_λ(K)Y = Σ_{i=1}^n G_λ(σ_i) ⟨q_i, Y⟩ q_i. For Tikhonov, G_λ(σ) = 1/(σ + nλ).
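
A possible numpy sketch of this construction, again with hypothetical function names; it applies an arbitrary scalar filter to the eigenvalues of K and recovers Tikhonov as a special case.

    import numpy as np

    def apply_filter(K, Y, filter_fn):
        # G_lambda(K) Y = sum_i G_lambda(sigma_i) <q_i, Y> q_i
        sigma, Q = np.linalg.eigh(K)      # K = Q diag(sigma) Q^T
        return Q @ (filter_fn(sigma) * (Q.T @ Y))

    def tikhonov_filter(lam, n):
        # Tikhonov: G_lambda(sigma) = 1 / (sigma + n*lambda)
        return lambda sigma: 1.0 / (sigma + n * lam)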

Regularization in Inverse Problems. In the inverse problems literature many algorithms are known besides Tikhonov regularization. Each algorithm is defined by a suitable filter function G_λ. This class of algorithms is known collectively as spectral regularization. These algorithms are not necessarily based on penalized empirical risk minimization.

Filtering, Regularization and Learning. The idea of using regularization from inverse problems in statistics (see Wahba) and machine learning (see Poggio and Girosi) is now well known. Ideas coming from inverse problems regarded mostly the use of Tikhonov regularization. The notion of filter function was studied in machine learning and gave a connection to function approximation in signal processing and approximation theory. The work of Poggio and Girosi highlighted the relation between neural networks, radial basis functions and regularization. Filtering was typically used to define a penalty for Tikhonov regularization; in the following it is used to define algorithms different from, though similar to, Tikhonov regularization.

Algorithms. Besides Tikhonov regularization we consider the following algorithms: Gradient Descent, also known as Landweber Iteration or L2 Boosting; the ν-method, an accelerated Landweber iteration; Iterated Tikhonov; Truncated Singular Value Decomposition (TSVD); Principal Component Regression (PCR). The spectral filtering perspective leads to a unified framework.

Properties of Spectral Filters. Not every scalar function defines a regularization scheme. Roughly speaking, a good filter function must have the following properties: as λ goes to 0, G_λ(σ) → 1/σ, so that G_λ(K) → K⁻¹; λ controls the magnitude of the (smaller) eigenvalues of G_λ(K).

Spectral Regularization for Learning. We can define a class of kernel methods as follows. Spectral Regularization: we look for estimators f^λ_S(X) = Σ_{i=1}^n c_i k(X, X_i), where c = G_λ(K)Y.
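
Putting the two previous sketches together, a spectral regularization estimator could hypothetically be written as below; gaussian_kernel and apply_filter are the assumed helpers from the earlier snippets.

    def spectral_fit_predict(X_train, Y, X_test, filter_fn, gamma=1.0):
        # c = G_lambda(K) Y, then f(x) = sum_i c_i k(x, X_i)
        K = gaussian_kernel(X_train, X_train, gamma)
        c = apply_filter(K, Y, filter_fn)
        return gaussian_kernel(X_test, X_train, gamma) @ c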

Gradient Descent. Consider the (Landweber) iteration: set c_0 = 0; for i = 1, ..., t, c_i = c_{i−1} + η(Y − Kc_{i−1}). If the largest eigenvalue of K is smaller than n, the above iteration converges if we choose the step size η = 2/n. The above iteration can be seen as the minimization of the empirical risk (1/n)‖Y − Kc‖²₂ via gradient descent.
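
A minimal numpy sketch of this iteration (the function name and default step size are my own choices; the default step size assumes the largest eigenvalue of K is smaller than n):

    import numpy as np

    def landweber(K, Y, t, eta=None):
        # gradient descent on the empirical risk (1/n)||Y - Kc||^2; t plays the role of 1/lambda
        n = K.shape[0]
        if eta is None:
            eta = 2.0 / n
        c = np.zeros_like(Y, dtype=float)
        for _ in range(t):
            c = c + eta * (Y - K @ c)
        return c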

Gradient Descent as Spectral Filtering. Note that c_0 = 0, c_1 = ηY, c_2 = ηY + η(I − ηK)Y, c_3 = ηY + η(I − ηK)Y + η(I − ηK)²Y, and so on. One can prove by induction that the solution at the t-th iteration is given by c = η Σ_{i=0}^{t−1} (I − ηK)^i Y. The filter function is G_λ(σ) = η Σ_{i=0}^{t−1} (1 − ησ)^i.

Another Perspective on the Landweber Iteration. Note that Σ_{i≥0} x^i = 1/(1 − x) for |x| < 1, and the same identity holds when x is replaced by a matrix with spectral radius smaller than one. If we consider the kernel matrix (or rather I − ηK) we get K⁻¹ = η Σ_{i=0}^∞ (I − ηK)^i ≈ η Σ_{i=0}^{t−1} (I − ηK)^i. The filter function of the Landweber iteration corresponds to a truncated power expansion of K⁻¹.
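
As a sanity check (hypothetical, reusing the gaussian_kernel, landweber and apply_filter helpers sketched above), the iterative solution coincides numerically with the corresponding spectral filter G(σ) = η Σ_{i<t} (1 − ησ)^i:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(20, 1))
    Y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(20)
    K = gaussian_kernel(X, X, gamma=10.0)

    t, eta = 50, 2.0 / 20
    c_iter = landweber(K, Y, t, eta)
    trunc_filter = lambda s: eta * np.array([((1 - eta * si) ** np.arange(t)).sum() for si in s])
    c_filt = apply_filter(K, Y, trunc_filter)
    print(np.allclose(c_iter, c_filt))   # expected: True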

Early Stopping. The regularization parameter is the number of iterations; roughly speaking, t ≈ 1/λ. Large values of t correspond to minimization of the empirical risk and tend to overfit. Small values of t tend to oversmooth (recall that we start from c = 0). Early stopping of the iteration has a regularization effect.

Gradient Descent at Work. [Figure: fit of the gradient descent estimator on a one-dimensional dataset (X on the horizontal axis, Y on the vertical axis), shown on three consecutive slides.]

Connection to L2 Boosting. The Landweber iteration (or gradient descent) has been rediscovered in statistics under the name of L2 Boosting. Boosting: the name Boosting denotes a large class of methods building estimators as linear (convex) combinations of weak learners. Many boosting algorithms can be seen as gradient descent minimization of the empirical risk over the linear span of some set of basis functions. For the Landweber iteration the weak learners are k(X_i, ·), i = 1, ..., n.

ν-method. The so-called ν-method, or accelerated Landweber iteration, can be thought of as an accelerated version of gradient descent. The filter function is G_t(σ) = p_t(σ), with p_t a polynomial of degree t − 1. The regularization parameter (think of 1/λ) is t² rather than t: fewer iterations are needed to attain a solution.

ν-method (cont.) The method is implemented by the following iteration: set c_0 = 0, ω_1 = (4ν + 2)/(4ν + 1), c_1 = c_0 + (ω_1/n)(Y − Kc_0); for i = 2, ..., t, c_i = c_{i−1} + u_i(c_{i−1} − c_{i−2}) + (ω_i/n)(Y − Kc_{i−1}), with u_i = (i − 1)(2i − 3)(2i + 2ν − 1) / [(i + 2ν − 1)(2i + 4ν − 1)(2i + 2ν − 3)] and ω_i = 4(2i + 2ν − 1)(i + ν − 1) / [(i + 2ν − 1)(2i + 4ν − 1)].
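
A direct transcription of this recursion into numpy; the function name and the default value ν = 1 are assumptions for illustration.

    import numpy as np

    def nu_method(K, Y, t, nu=1.0):
        # accelerated Landweber (nu-method) iteration
        n = K.shape[0]
        c_prev = np.zeros_like(Y, dtype=float)              # c_0
        omega1 = (4 * nu + 2) / (4 * nu + 1)
        c = c_prev + (omega1 / n) * (Y - K @ c_prev)        # c_1
        for i in range(2, t + 1):
            u = ((i - 1) * (2 * i - 3) * (2 * i + 2 * nu - 1)) / (
                (i + 2 * nu - 1) * (2 * i + 4 * nu - 1) * (2 * i + 2 * nu - 3))
            omega = 4 * ((2 * i + 2 * nu - 1) * (i + nu - 1)) / (
                (i + 2 * nu - 1) * (2 * i + 4 * nu - 1))
            c, c_prev = c + u * (c - c_prev) + (omega / n) * (Y - K @ c), c
        return c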

Iterated Tikhonov. The following method can be seen as a combination of Tikhonov regularization and gradient descent: set c_0 = 0; for i = 1, ..., t, solve (K + nλI)c_i = Y + nλc_{i−1}. The filter function is G_λ(σ) = [(σ + λ)^t − λ^t] / [σ(σ + λ)^t].
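
A possible numpy sketch of the iteration above (the helper name is hypothetical; taking t = 1 recovers ordinary Tikhonov regularization):

    import numpy as np

    def iterated_tikhonov(K, Y, lam, t):
        # repeatedly solve (K + n*lam*I) c_i = Y + n*lam*c_{i-1}, starting from c_0 = 0
        n = K.shape[0]
        A = K + n * lam * np.eye(n)
        c = np.zeros_like(Y, dtype=float)
        for _ in range(t):
            c = np.linalg.solve(A, Y + n * lam * c)
        return c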

Iterated Tikhonov (cont.) Both the number of iterations and λ can be seen as regularization parameters. Iterated Tikhonov can be used to enforce more smoothness on the solution. Ordinary Tikhonov regularization suffers from a saturation effect: it cannot exploit the regularity of the solution beyond a certain critical value.

Truncated Singular Value Decomposition. This method is one of the oldest regularization techniques and is also called spectral cut-off. TSVD: given the eigen-decomposition K = QΣQ^t, a regularized inverse of the kernel matrix is built by discarding all the eigenvalues below the prescribed threshold nλ. It is described by the filter function G_λ(σ) = 1/σ if σ ≥ nλ, and 0 otherwise.
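
A short numpy sketch of the spectral cut-off (function name mine; the small constant in the denominator only avoids a division-by-zero warning for the discarded eigenvalues):

    import numpy as np

    def tsvd(K, Y, lam):
        # keep 1/sigma for eigenvalues above n*lam, set the rest to zero
        n = K.shape[0]
        sigma, Q = np.linalg.eigh(K)
        inv = np.where(sigma >= n * lam, 1.0 / np.maximum(sigma, 1e-12), 0.0)
        return Q @ (inv * (Q.T @ Y))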

Connection to PCA. Principal Component Analysis is a well known dimensionality reduction technique often used as preprocessing in learning. PCA: assuming centered data, X^t X is the covariance matrix and its eigenvectors (V^j)_{j=1}^d are the principal components. PCA amounts to mapping each example X_i into X̃_i = (X_i^t V^1, ..., X_i^t V^m), where m < min{n, d}. Notation: X_i^t denotes the transpose of the i-th row (example) of X.

PCA (cont.) The above algorithm can be written using only the linear kernel matrix XX^t and its eigenvectors (U^j)_{j=1}^n. The nonzero eigenvalues of XX^t and X^t X are the same, and V^j = (1/√σ_j) X^t U^j. Then X̃_i = ((1/√σ_1) Σ_{j=1}^n U^1_j X_i^t X_j, ..., (1/√σ_m) Σ_{j=1}^n U^m_j X_i^t X_j). Note that X_i^t X_j = k(X_i, X_j).

Kernel PCA. We can perform a nonlinear principal component analysis, namely KPCA, by choosing a nonlinear kernel function. Using K = QΣQ^T we can rewrite the projection in vector notation: if we let Σ_m = diag(σ_1, ..., σ_m, 0, ..., 0), then the projected data matrix X̃ is X̃ = KQΣ_m^{−1/2}, where Σ_m^{−1/2} denotes the pseudo-inverse square root diag(1/√σ_1, ..., 1/√σ_m, 0, ..., 0).
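
A compact numpy version of this projection, keeping only the top m components (the function name is an assumption):

    import numpy as np

    def kernel_pca_projection(K, m):
        # X_tilde = K Q_m diag(1/sqrt(sigma_1), ..., 1/sqrt(sigma_m))
        sigma, Q = np.linalg.eigh(K)
        order = np.argsort(sigma)[::-1]      # eigenvalues in decreasing order
        sigma, Q = sigma[order], Q[:, order]
        return (K @ Q[:, :m]) / np.sqrt(sigma[:m])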

Principal Component Regression. ERM on the projected data, min_{β ∈ R^m} (1/n)‖Y − X̃β‖², is equivalent to performing truncated singular value decomposition on the original problem. The Representer Theorem tells us that β^t X̃_i = Σ_{j=1}^n X̃_j^t X̃_i c_j with c = (X̃X̃^t)^{−1} Y.

Dimensionality Reduction and Generalization. Using X̃ = KQΣ_m^{−1/2} we get X̃X̃^t = QΣQ^t QΣ_m^{−1/2} Σ_m^{−1/2} Q^t QΣQ^t = QΣ_m Q^t, so that c = QΣ_m^{−1}Q^t Y = G_λ(K)Y, where G_λ is the filter function of TSVD. The two procedures are equivalent. The regularization parameter is the eigenvalue threshold in one case and the number of components kept in the other case.
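
The equivalence can be checked numerically with the hypothetical tsvd and kernel_pca_projection helpers above; the threshold below is chosen so that exactly m eigenvalues are kept.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((30, 5))
    Y = rng.standard_normal(30)
    K = X @ X.T                                   # linear kernel, for simplicity
    m = 3

    X_proj = kernel_pca_projection(K, m)
    beta, *_ = np.linalg.lstsq(X_proj, Y, rcond=None)
    pred_pcr = X_proj @ beta

    sigma = np.sort(np.linalg.eigvalsh(K))[::-1]
    lam = (sigma[m - 1] + sigma[m]) / (2 * K.shape[0])
    pred_tsvd = K @ tsvd(K, Y, lam)
    print(np.allclose(pred_pcr, pred_tsvd))       # expected: True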

Dimensionality Reduction and Generalization (cont.) Projection regularizes! Doing KPCA and then RLS is redundant. If data are centered, spectral regularization (Tikhonov included) can be seen as a filtered projection onto the principal components.

Why Should These Methods Learn? We have seen that G_λ(K) → K⁻¹ as λ → 0. However, we usually DON'T want to solve Kc = Y, since it would simply correspond to an over-fitting solution. Stability vs generalization: how can we show that stability ensures generalization?

Population Case. It is useful to consider what happens if we know the true distribution. Integral operator: for n large enough, (1/n)K ≈ L_k, where L_k f(s) = ∫_X k(x, s) f(x) p(x) dx. The ideal problem: for n large enough, Kc = Y becomes L_k f = L_k f_ρ, where f_ρ is the regression (target) function defined by f_ρ(x) = ∫_Y y p(y|x) dy.

Regularization in the Population Case. It can be shown that the population version of empirical risk minimization is the least squares problem associated to L_k f = L_k f_ρ. Tikhonov regularization in this case simply adds the penalty λ‖f‖²_H, or equivalently f^λ = (L_k + λI)⁻¹ L_k f_ρ.

Fourier Decomposition of the Regression Function. Fourier decomposition of f_ρ and f^λ: if we diagonalize L_k to get the eigensystem (t_i, φ_i)_i, we can write f_ρ = Σ_i ⟨f_ρ, φ_i⟩ φ_i; perturbations affect the high-order components. Tikhonov regularization can be written as f^λ = Σ_i t_i/(t_i + λ) ⟨f_ρ, φ_i⟩ φ_i. Sampling IS a perturbation: by stabilizing the problem with respect to random discretization (sampling) we can recover f_ρ.

Comments on Complexity and Parameter Choice. Iterative methods perform a matrix-vector multiplication, O(n²), at each iteration, and the regularization parameter is the number of iterations itself. There is no closed form for the leave-one-out error, and parameter tuning differs from method to method. Compared to RLS, in iterative and projection methods the regularization parameter is naturally discrete. TSVD has a natural range for the search of the regularization parameter, and for TSVD the regularization parameter can be interpreted in terms of dimensionality reduction.

Final remarks. Many different principles lead to regularization: penalized minimization, iterative optimization, projection. The common intuition is that they all enforce stability of the solution. All the methods are implicitly based on the use of the square loss. For other loss functions, different notions of stability can be used.