L3 Apprentissage (Machine Learning). Michèle Sebag, Benjamin Monmège (LRI, LSV). February 27, 2013



Next course (Mar. 13th): Tutorials / Videolectures, http://www.iro.umontreal.ca/~bengioy/talks/icml2012-ybtutorial.pdf (Part 1: 1-56; Part 2: 79-133). Oral participation: some students present parts 1-2, the other students ask questions.

Hypothesis Space H / Navigation

This course: hypothesis spaces H and their navigation operators.
- Version Space: logical specialisation / generalisation
- Decision Trees: logical specialisation
- Neural Networks: numerical gradient
- Support Vector Machines: numerical quadratic optimization
- Ensemble Methods: adaptation of $E$

Here $h : X = \mathbb{R}^D \to \mathbb{R}$; binary classification: if $h(x) > 0$, $x$ is classified as True, otherwise as False.

Overview
- Linear SVM, separable case
- Linear SVM, non-separable case
- The kernel trick: the kernel principle, examples, discussion
- Extensions: multi-class discrimination, regression, novelty detection
- On the practitioner side: improve precision, reduce computational cost
- Theory

The separable case: more than one separating hyperplane

Linear Support Vector Machines
Linear separators: $f(x) = \langle w, x \rangle + b$.
Region $\hat{y} = +1$: $f(x) > 0$; region $\hat{y} = -1$: $f(x) < 0$.
Criterion: $\forall i,\ y_i(\langle w, x_i \rangle + b) > 0$.
Remark: invariant under multiplication of $w$ and $b$ by a positive value.

Canonical formulation
Fix the scale: $\min_i \{ y_i(\langle w, x_i \rangle + b) \} = 1$, hence $\forall i,\ y_i(\langle w, x_i \rangle + b) \ge 1$.

Maximize the Margin
Criterion: maximize the minimal distance from the points to the hyperplane, i.e. obtain the largest possible band.
Margin: $\langle w, x^+ \rangle + b = 1$ and $\langle w, x^- \rangle + b = -1$, hence $\langle w, x^+ - x^- \rangle = 2$.
The margin is the projection of $x^+ - x^-$ on the normal vector of the hyperplane, $w / \|w\|$, i.e. $2 / \|w\|$.
Maximizing $2/\|w\|$ amounts to minimizing $\|w\|^2$.

Maximize the Margin (2)
Problem: minimize $\frac{1}{2}\|w\|^2$ subject to the constraints $\forall i,\ y_i(\langle w, x_i \rangle + b) \ge 1$.

Maximal Margin Hyperplane

Quadratic Optimization (reminder)
Optimize $f$ under constraints $f_i \ge 0$, where $f$ and the $f_i$ are convex. Introduce Lagrange multipliers $\alpha_i$ ($\alpha_i \ge 0$) and consider (penalization of the violated constraints)
$$F(x, \alpha) = f(x) - \sum_i \alpha_i f_i(x)$$
Kuhn-Tucker principle (1951): the optimum $(x_0, \alpha^*)$ is a saddle point of $F$,
$$F(x_0, \alpha^*) = \max_{\alpha \ge 0} F(x_0, \alpha) = \min_x F(x, \alpha^*)$$

Primal Problem
$$L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_i \alpha_i \big( y_i(\langle x_i, w \rangle + b) - 1 \big), \qquad \alpha_i \ge 0$$
Differentiate w.r.t. $b$: at the optimum, $\frac{\partial L}{\partial b} = 0 \Rightarrow \sum_i \alpha_i y_i = 0$.
Differentiate w.r.t. $w$: $\frac{\partial L}{\partial w} = 0 \Rightarrow w = \sum_i \alpha_i y_i x_i$.
Replace in $L(w, b, \alpha)$.

Dual problem (Wolfe)
Maximize
$$W(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle$$
subject to $\forall i,\ \alpha_i \ge 0$ and $\sum_i \alpha_i y_i = 0$.
Quadratic form w.r.t. $\alpha$: quadratic optimization is easy. Solution: $\alpha_i^*$.
Compute $w^*$: $w^* = \sum_i \alpha_i^* y_i x_i$.
If $(\langle x_i, w^* \rangle + b)\, y_i > 1$, then $\alpha_i^* = 0$; if $(\langle x_i, w^* \rangle + b)\, y_i = 1$, then $\alpha_i^* > 0$ and $x_i$ is a support vector.
Compute $b^*$: $b^* = -\frac{1}{2}\big( \langle w^*, x^+ \rangle + \langle w^*, x^- \rangle \big)$.
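The dual solution can be inspected numerically. Below is a minimal sketch (not from the slides), assuming scikit-learn and NumPy are available; a very large C approximates the hard-margin, separable case, and the fitted model exposes the quantities of this slide: the products $\alpha_i y_i$, the support vectors, and $(w^*, b^*)$.

```python
# Minimal sketch of the separable-case dual solution (assumes scikit-learn/NumPy).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0],
              [3.0, 3.0], [4.0, 2.0], [5.0, 4.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)   # very large C ~ separable case

alpha_y = clf.dual_coef_[0]        # alpha_i * y_i, for the support vectors only
sv = clf.support_vectors_          # the x_i with alpha_i > 0
w = alpha_y @ sv                   # w* = sum_i alpha_i y_i x_i
b = clf.intercept_[0]

print("support vectors:", sv)
print("w* =", w, " b* =", b)
# margins y_i(<w, x_i> + b): equal to 1 on the support vectors, > 1 elsewhere
print("margins:", y * (X @ w + b))
```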


Summary
$E = \{(x_i, y_i)\},\ x_i \in \mathbb{R}^d,\ y_i \in \{-1, 1\},\ i = 1..n$, with $(x_i, y_i) \sim P(x, y)$.
Two goals for $h(x) = \langle w, x \rangle + b$:
- Data fitting: $\mathrm{sign}(y_i) = \mathrm{sign}(h(x_i))$, maximize the margin $y_i \cdot h(x_i)$ (achieve learning).
- Regularization: minimize $\|w\|$ (avoid overfitting).

Support Vector Machines: general scheme
Minimize the regularization term subject to the data constraints (margin $\ge 1$) (*):
$$\min \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad y_i(\langle w, x_i \rangle + b) \ge 1,\quad i = 1 \dots n$$
Constrained minimization of a convex function: introduce Lagrange multipliers $\alpha_i \ge 0$, $i = 1 \dots n$:
$$\min L(w, b, \alpha) = \frac{1}{2}\|w\|^2 + \sum_i \alpha_i \big( 1 - y_i(\langle w, x_i \rangle + b) \big)$$
Primal problem: $d + 1$ variables (+ $n$ Lagrange multipliers).
(*) In the separable case; see later for the non-separable case.

Support Vector Machines, 2
At the optimum: $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial b} = \frac{\partial L}{\partial \alpha} = 0$.
Dual problem (Wolfe):
$$\max Q(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle \quad \text{s.t.} \quad \forall i,\ \alpha_i \ge 0,\ \sum_i \alpha_i y_i = 0$$
Support vectors: the examples $(x_i, y_i)$ with $\alpha_i > 0$, the only ones involved in the decision function
$$w = \sum_i \alpha_i y_i x_i$$

Support vectors, examples

Support vectors, examples: MNIST data (figure: the data and their support vectors)
Remarks: support vectors are critical (near-miss) examples.
Exercise: show that the Leave-One-Out error is bounded by the number of support vectors.
LOO: iteratively, learn on all examples but one, and test on the remaining one.
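A rough numerical illustration of the Leave-One-Out remark, assuming scikit-learn; the dataset and parameters are arbitrary choices, not from the slides.

```python
# Sketch: the LOO error rate of an SVM is at most n_SV / n (classical result).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, cluster_std=2.0, random_state=0)

clf = SVC(kernel="linear", C=1.0)
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
n_sv = clf.fit(X, y).support_.shape[0]

print(f"LOO error rate : {1 - loo_acc:.3f}")
print(f"#SV / n        : {n_sv / len(y):.3f}")   # upper-bounds the LOO error rate
```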

Overview (reminder). Next: Linear SVM, the non-separable case.

Separable vs non-separable data (figure: training set vs. test set)

Linear hypotheses, non-separable data (Cortes & Vapnik, 1995)
Non-separable data: not all constraints can be satisfied.
Formalization: introduce slack variables $\xi_i$ in the constraints, $y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i$, and penalize them:
$$\min \frac{1}{2}\|w\|^2 + C \sum_i \xi_i \quad \text{s.t.} \quad \forall i,\ y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i,\ \xi_i \ge 0$$
Critical decision: adjusting $C$, the error cost.

Primal problem, non-separable case
Same resolution: Lagrange multipliers $\alpha_i \ge 0$ and $\beta_i \ge 0$,
$$L(w, b, \xi, \alpha, \beta) = \frac{1}{2}\|w\|^2 + C \sum_i \xi_i - \sum_i \alpha_i \big( y_i(\langle w, x_i \rangle + b) - 1 + \xi_i \big) - \sum_i \beta_i \xi_i$$
At the optimum, likewise $\frac{\partial L}{\partial w} = \frac{\partial L}{\partial b} = \frac{\partial L}{\partial \xi} = 0$, hence
$$w = \sum_i \alpha_i y_i x_i, \qquad \sum_i \alpha_i y_i = 0, \qquad C - \alpha_i - \beta_i = 0\ \ \forall i$$
Convex (quadratic) optimization problem: it is equivalent to solve the primal and the dual problem (expressed with the multipliers $\alpha, \beta$).

Dual problem, non-separable case
$$\min_\alpha\ \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle - \sum_i \alpha_i, \qquad 0 \le \alpha_i \le C,\ \ \sum_i \alpha_i y_i = 0$$
A mathematically nice problem: with $H$ the positive semi-definite $n \times n$ matrix $H_{i,j} = y_i y_j \langle x_i, x_j \rangle$ and $e = (1, \dots, 1) \in \mathbb{R}^n$, the dual is the quadratic program
$$\min_\alpha\ \frac{1}{2} \alpha^T H \alpha - \langle \alpha, e \rangle$$

Support vectors
Only the support vectors ($\alpha_i > 0$) are involved in $h$: $w = \sum_i \alpha_i y_i x_i$, and the importance of support vector $x_i$ is its weight $\alpha_i$.
Difference with the separable case: $0 < \alpha_i \le C$, i.e. the influence of each example is bounded.

The loss (error cost) function
Role: the goal is data fitting; the loss function characterizes the learning goal, while solving a convex optimization problem makes it tractable/reproducible.
Error costs:
- Binary (0-1) cost: $\ell(y, h(x)) = 1$ iff $y \ne h(x)$
- Quadratic cost: $\ell(y, h(x)) = (y - h(x))^2$
- Hinge loss: $\ell(y, h(x)) = \max(0, 1 - y \cdot h(x)) = (1 - y \cdot h(x))_+ = \xi$
(Figure: hinge loss, squared hinge loss and 0-1 loss as functions of $y \cdot f(x)$.)
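The soft-margin primal with the hinge loss can also be minimized directly by subgradient descent. The sketch below (NumPy only, illustrative; this is not how SVM libraries solve the problem, they use quadratic programming or SMO) makes the link between the regularization-plus-loss view and a concrete algorithm.

```python
# Subgradient descent on 1/2 ||w||^2 + C * sum_i max(0, 1 - y_i(<w, x_i> + b)).
import numpy as np

def svm_subgradient(X, y, C=1.0, lr=1e-3, epochs=2000):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                    # examples with non-zero hinge loss
        # subgradient of 1/2 ||w||^2 + C * sum of active hinge terms
        grad_w = w - C * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -C * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(+2, 1, (30, 2))])
y = np.array([-1] * 30 + [+1] * 30)
w, b = svm_subgradient(X, y, C=1.0)
print("training error:", np.mean(np.sign(X @ w + b) != y))
```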

Complexity
Learning complexity: worst case $O(n^3)$; empirical complexity depends on $C$, about $O(n^2\, n_{sv})$ where $n_{sv}$ is the number of support vectors.
Usage (prediction) complexity: $O(n_{sv})$.

Overview (reminder). Next: the kernel trick.

Non-separable data
Representation change: $x \in \mathbb{R}^2 \mapsto$ polar coordinates $\in \mathbb{R}^2$.

Principle
Intuition: $\Phi : X \to \Phi(X) \subset \mathbb{R}^D$. In a high-dimensional space, every dataset is linearly separable. Map the data onto $\Phi(X)$, and we are back to linear separation.
Glossary: $X$ is the input space; $\Phi(X)$ is the feature space.

The kernel trick
Remark: generalization bounds do not depend on the dimension of the input space $X$, but on the capacity of the hypothesis space $H$; and SVMs only involve scalar products $\langle x_i, x_j \rangle$.
Intuition: the representation change is only virtual. Given $\Phi : X \to \Phi(X)$, consider the scalar product in $\Phi(X)$ and compute it in $X$:
$$K(x_i, x_j) = \langle \Phi(x_i), \Phi(x_j) \rangle$$

Example: polynomial kernel
Principle: $x \in \mathbb{R}^3 \mapsto \Phi(x) \in \mathbb{R}^{10}$,
$$x = (x_1, x_2, x_3) \mapsto \Phi(x) = \big(1, \sqrt{2}x_1, \sqrt{2}x_2, \sqrt{2}x_3, \sqrt{2}x_1x_2, \sqrt{2}x_1x_3, \sqrt{2}x_2x_3, x_1^2, x_2^2, x_3^2\big)$$
Why $\sqrt{2}$? Because $\langle \Phi(x), \Phi(x') \rangle = (1 + \langle x, x' \rangle)^2 = K(x, x')$.
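A quick numerical check of this identity, assuming NumPy; the two vectors are arbitrary.

```python
# Check that <Phi(x), Phi(x')> = (1 + <x, x'>)^2 for the feature map above.
import numpy as np

def phi(x):
    x1, x2, x3 = x
    s = np.sqrt(2)
    return np.array([1, s*x1, s*x2, s*x3, s*x1*x2, s*x1*x3, s*x2*x3,
                     x1**2, x2**2, x3**2])

x  = np.array([1.0, 2.0, 3.0])
xp = np.array([0.5, -1.0, 2.0])

lhs = phi(x) @ phi(xp)          # explicit feature space
rhs = (1 + x @ xp) ** 2         # kernel computed in input space
print(lhs, rhs)                 # identical up to floating-point error
```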

Primal and dual problems unchanged
Primal problem: $\min \frac{1}{2}\|w\|^2$ s.t. $y_i(\langle w, \Phi(x_i) \rangle + b) \ge 1$, $i = 1 \dots n$.
Dual problem: $\max Q(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$ s.t. $\forall i,\ \alpha_i \ge 0$, $\sum_i \alpha_i y_i = 0$.
Hypothesis: $h(x) = \sum_i \alpha_i y_i K(x_i, x)$.

Example: polynomial kernel
$$K(x, x') = (a \langle x, x' \rangle + 1)^b$$
Choice of $a, b$: cross-validation. Do high- or low-degree terms dominate? Normalization matters.

Example: Radial Basis Function (RBF) kernel
$$K(x, x') = \exp\big(-\gamma \|x - x'\|^2\big)$$
No closed form for $\Phi$; $\Phi(X)$ has infinite dimension. For $x \in \mathbb{R}$,
$$\Phi(x) = \exp(-\gamma x^2)\left[1,\ \sqrt{\tfrac{2\gamma}{1!}}\,x,\ \sqrt{\tfrac{(2\gamma)^2}{2!}}\,x^2,\ \sqrt{\tfrac{(2\gamma)^3}{3!}}\,x^3,\ \dots\right]$$
Choice of $\gamma$? (Intuition: think of $H$, $H_{i,j} = y_i y_j K(x_i, x_j)$.)
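A sketch, assuming NumPy and scikit-learn, of building the RBF Gram matrix; the "median heuristic" used below to initialize $\gamma$ is a common rule of thumb (an assumption, not something the slides prescribe), and cross-validation remains the proper way to choose $\gamma$.

```python
# Build an RBF Gram matrix and pick a rough gamma via the median heuristic.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, euclidean_distances

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

d2 = euclidean_distances(X, squared=True)
gamma = 1.0 / np.median(d2[d2 > 0])          # median heuristic (assumption)

K = rbf_kernel(X, gamma=gamma)               # K_ij = exp(-gamma ||x_i - x_j||^2)
print("gamma =", gamma, " K diagonal =", K.diagonal()[:3])
```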

String kernels (Watkins 1999, Lodhi et al. 2002)
Notations: $s$ a string on alphabet $\Sigma$; $\mathbf{i} = (i_1, i_2, \dots, i_n)$ an ordered index sequence ($i_j < i_{j+1}$), with $l(\mathbf{i}) = i_n - i_1 + 1$; $s[\mathbf{i}]$ the substring of $s$ extracted with pattern $\mathbf{i}$.
Example: $s = $ BICYCLE, $\mathbf{i} = (1, 3, 6)$, $s[\mathbf{i}] = $ BCL.
Definition, with $0 < \varepsilon < 1$ a discount factor:
$$K_n(s, s') = \sum_{u \in \Sigma^n}\ \sum_{\mathbf{i}\,:\, s[\mathbf{i}] = u}\ \sum_{\mathbf{j}\,:\, s'[\mathbf{j}] = u} \varepsilon^{\,l(\mathbf{i}) + l(\mathbf{j})}$$

String kernels, continued
$\Phi$: projection on $\mathbb{R}^D$ where $D = |\Sigma|^n$.
Example ($n = 2$), features CH, CA, CT, AT:
  CHAT:   $\varepsilon^2$, $\varepsilon^3$, $\varepsilon^4$, $\varepsilon^2$
  CARTON: $0$, $\varepsilon^2$, $\varepsilon^4$, $\varepsilon^3$
hence $K(\text{CHAT}, \text{CARTON}) = 2\varepsilon^5 + \varepsilon^8$.
Prefer the normalized version
$$\kappa(s, s') = \frac{K(s, s')}{\sqrt{K(s, s)\, K(s', s')}}$$
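The toy example above can be checked by brute force, assuming only the Python standard library; this enumeration is exponential and only meant for tiny strings (efficient implementations use the dynamic program of Lodhi et al.).

```python
# Brute-force subsequence string kernel: verify K(CHAT, CARTON) = 2*eps^5 + eps^8.
from itertools import combinations
from collections import defaultdict

def subseq_weights(s, n, eps):
    """Map each length-n subsequence u of s to sum over i with s[i]=u of eps^l(i)."""
    w = defaultdict(float)
    for idx in combinations(range(len(s)), n):
        u = "".join(s[k] for k in idx)
        w[u] += eps ** (idx[-1] - idx[0] + 1)     # l(i) = i_n - i_1 + 1
    return w

def string_kernel(s, t, n, eps):
    ws, wt = subseq_weights(s, n, eps), subseq_weights(t, n, eps)
    return sum(ws[u] * wt[u] for u in ws if u in wt)

eps = 0.5
print(string_kernel("CHAT", "CARTON", 2, eps))    # matches 2*eps**5 + eps**8
print(2 * eps**5 + eps**8)
```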

String kernels, continued
Application 1: document mining. Pre-processing matters a lot (stop-words, stemming); multi-lingual aspects; document classification; information retrieval.
Application 2: bio-informatics. Pre-processing matters a lot; classification (secondary structures).
Extension to graph kernels: http://videolectures.net/gbr07_vert_ckac/

Application to musical analysis
Input: MIDI files. Pre-processing: rhythm detection. Representation: the musical worm (tempo, loudness). Output: identification of performer styles.
Using String Kernels to Identify Famous Performers from their Playing Style, Saunders et al., 2004.

Kernels: key features
From an absolute to a relative representation: $\langle x, x' \rangle$ relates to the angle between $x$ and $x'$; more generally, $K(x, x')$ measures the (non-linear) similarity of $x$ and $x'$, and $x$ is described by its similarity to the other examples.
Necessary condition (Mercer condition): $K$ must be positive semi-definite,
$$\forall g \in L_2, \quad \int K(x, x')\, g(x)\, g(x')\, dx\, dx' \ge 0$$

Why? Related to $\Phi$: if the Mercer condition holds, there exist eigenfunctions $\phi_1, \phi_2, \dots$ and eigenvalues $\lambda_i > 0$ such that
$$k(x, x') = \sum_{i=1}^{\infty} \lambda_i\, \phi_i(x)\, \phi_i(x')$$
Kernel properties: let $K$, $K'$ be p.d. kernels and $\alpha > 0$; then
- $\alpha K$ is a p.d. kernel
- $K + K'$ is a p.d. kernel
- $K \cdot K'$ is a p.d. kernel
- $K(x, x') = \lim_{p \to \infty} K_p(x, x')$ is p.d. if the limit exists
- $K(A, B) = \sum_{x \in A,\, x' \in B} K(x, x')$ is a p.d. kernel

Overview (reminder). Next: extensions, multi-class discrimination.

Multi-class discrimination
Input, binary case: $E = \{(x_i, y_i)\},\ x_i \in \mathbb{R}^d,\ y_i \in \{-1, 1\},\ i = 1..n$, with $(x_i, y_i) \sim P(x, y)$.
Multi-class case: $E = \{(x_i, y_i)\},\ x_i \in \mathbb{R}^d,\ y_i \in \{1 \dots k\},\ i = 1..n$, with $(x_i, y_i) \sim P(x, y)$.
Output: $\hat{h} : \mathbb{R}^d \to \{1 \dots k\}$.

Multi-class learning: one against all
First option: $k$ binary learning problems.
- Problem 1: class 1 $\to +1$, classes $2 \dots k$ $\to -1$, yielding $h_1$
- Problem 2: class 2 $\to +1$, classes $1, 3, \dots, k$ $\to -1$, yielding $h_2$
- ...
Prediction: $h(x) = i$ iff $h_i(x) = \arg\max\{h_j(x),\ j = 1 \dots k\}$.
Justification: if $x$ belongs to class 1, one should have $h_1(x) \ge 1$ and $h_j(x) \le -1$ for $j \ne 1$.

Where is the difficulty? (Figure: what we get with one-vs-all vs. what we want.)

Multi-class learning: one vs one
Second option: $\frac{k(k-1)}{2}$ binary classification problems. Problem $(i, j)$: class $i \to +1$, class $j \to -1$, yielding $h_{i,j}$.
Prediction: compute all $h_{i,j}(x)$ and count the votes. NB: one can also use the $h_{i,j}(x)$ values.

Multi-class learning: additional constraints (Vapnik 1998; Weston & Watkins 1999)
Another option: a single joint problem.
$$\min \frac{1}{2} \sum_{j=1}^{k} \|w_j\|^2 + C \sum_{i=1}^{n} \sum_{l=1,\, l \ne y_i}^{k} \xi_{i,l}$$
subject to $\forall i,\ \forall l \ne y_i$:
$$(\langle w_{y_i}, x_i \rangle + b_{y_i}) \ge (\langle w_l, x_i \rangle + b_l) + 2 - \xi_{i,l}, \qquad \xi_{i,l} \ge 0$$
Hum! $n \cdot k$ constraints, hence $n \cdot k$ dual variables.

Recommendations
In practice, the results are in general (but not always!) similar; one-vs-one is the fastest option.
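A sketch, assuming scikit-learn, of the two decompositions above: $k$ one-against-all classifiers versus $k(k-1)/2$ one-vs-one classifiers wrapped around the same binary SVM (the dataset and kernel are arbitrary choices).

```python
# Compare one-vs-all and one-vs-one decompositions built on the same binary SVM.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

ova = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
ovo = OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)

print("one-vs-all estimators:", len(ova.estimators_))   # k
print("one-vs-one estimators:", len(ovo.estimators_))   # k(k-1)/2
print("agreement on the training set:", (ova.predict(X) == ovo.predict(X)).mean())
```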

Overview (reminder). Next: extensions, regression.

Regression
Input: $E = \{(x_i, y_i)\},\ x_i \in \mathbb{R}^d,\ y_i \in \mathbb{R},\ i = 1..n$, with $(x_i, y_i) \sim P(x, y)$.
Output: $\hat{h} : \mathbb{R}^d \to \mathbb{R}$.

Regression with Support Vector Machines
Intuition: find $h$ deviating by at most $\varepsilon$ from the data (loss function) while being as flat as possible (regularization).
Formulation:
$$\min \frac{1}{2}\|w\|^2 \quad \text{s.t.} \quad \forall i = 1 \dots n, \quad (\langle w, x_i \rangle + b) - y_i \le \varepsilon, \quad y_i - (\langle w, x_i \rangle + b) \le \varepsilon$$

Regression with Support Vector Machines, continued
Using slack variables:
$$\min \frac{1}{2}\|w\|^2 + C \sum_i (\xi_i^+ + \xi_i^-) \quad \text{s.t.} \quad \forall i = 1 \dots n, \quad y_i - (\langle w, x_i \rangle + b) \le \varepsilon + \xi_i^+, \quad (\langle w, x_i \rangle + b) - y_i \le \varepsilon + \xi_i^-, \quad \xi_i^\pm \ge 0$$

Regression with Support Vector Machines, continued
Primal problem:
$$L(w, b, \xi, \alpha, \beta) = \frac{1}{2}\|w\|^2 + C \sum_i (\xi_i^+ + \xi_i^-) - \sum_i \alpha_i^+ \big( \langle w, x_i \rangle + b - y_i + \varepsilon + \xi_i^+ \big) - \sum_i \alpha_i^- \big( y_i - \langle w, x_i \rangle - b + \varepsilon + \xi_i^- \big) - \sum_i \beta_i^+ \xi_i^+ - \sum_i \beta_i^- \xi_i^-$$
Dual problem:
$$Q(\alpha^+, \alpha^-) = \sum_i y_i(\alpha_i^+ - \alpha_i^-) - \varepsilon \sum_i (\alpha_i^+ + \alpha_i^-) - \frac{1}{2} \sum_{i,j} (\alpha_i^+ - \alpha_i^-)(\alpha_j^+ - \alpha_j^-) \langle x_i, x_j \rangle$$
subject to $\sum_i (\alpha_i^+ - \alpha_i^-) = 0$ and, $\forall i = 1 \dots n$, $0 \le \alpha_i^+ \le C$, $0 \le \alpha_i^- \le C$.

Regression with Support Vector Machines, continued
Hypothesis: $h(x) = \sum_i (\alpha_i^+ - \alpha_i^-) \langle x_i, x \rangle + b$.
With no loss of generality, one can replace everywhere $\langle x, x' \rangle$ by $K(x, x')$.
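A sketch of $\varepsilon$-insensitive regression, assuming scikit-learn and NumPy; the data and hyperparameters are arbitrary.

```python
# Epsilon-insensitive SVM regression: deviations smaller than epsilon are free.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 80))[:, None]
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

reg = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("support vectors:", len(reg.support_), "/", len(y))
print("mean |error|:", np.mean(np.abs(reg.predict(X) - y)))
```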

Beware: high-dimensional regression
$E = \{(x_i, y_i)\},\ x_i \in \mathbb{R}^D,\ y_i \in \mathbb{R},\ i = 1..n$, with $(x_i, y_i) \sim P(x, y)$.
A very slippery game if $D \gg n$ (curse of dimensionality): dimensionality reduction is mandatory.
Map $x$ onto $\mathbb{R}^d$. Central subspace: $\pi : X \to S \subset \mathbb{R}^d$ with $S$ minimal such that $y$ and $x$ are independent conditionally to $\pi(x)$. Find $h, w$ such that $y = h(\langle w, x \rangle)$.

Sliced Inverse Regression (Bernard-Michel et al., 2009)
More: http://mistis.inrialpes.fr/learninria/ (S. Girard)

Overview (reminder). Next: extensions, novelty detection.

Novelty Detection
Input: $E = \{x_i\},\ x_i \in X,\ i = 1..n$, with $x_i \sim P(x)$.
Context: information retrieval; identification of the data support; estimation of the distribution.
Critical issue: classification approaches are not efficient here (too much noise).

One-class SVM
Formulation:
$$\min \frac{1}{2}\|w\|^2 - \rho + C \sum_i \xi_i \quad \text{s.t.} \quad \forall i = 1 \dots n, \quad \langle w, x_i \rangle \ge \rho - \xi_i, \quad \xi_i \ge 0$$
Dual problem:
$$\min \sum_{i,j} \alpha_i \alpha_j \langle x_i, x_j \rangle \quad \text{s.t.} \quad \forall i = 1 \dots n, \quad 0 \le \alpha_i \le C, \quad \sum_i \alpha_i = 1$$
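A sketch of novelty detection, assuming scikit-learn; note that the library implements the $\nu$-parameterized variant of the formulation above, where $\nu$ upper-bounds the fraction of training points treated as outliers.

```python
# One-class SVM for novelty detection (nu plays the role of the error cost).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                    # "normal" data only
X_test = np.vstack([rng.normal(size=(20, 2)),          # more normal points
                    rng.uniform(-6, 6, size=(5, 2))])  # a few outliers

det = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
print(det.predict(X_test))    # +1 = inside the estimated support, -1 = novelty
```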

Implicit surface modelling (Schölkopf et al., 2004)
Goal: find the surface formed by the data points. The constraint $\langle w, x_i \rangle \ge \rho$ becomes $-\varepsilon \le \langle w, x_i \rangle - \rho \le \varepsilon$.

Overview (reminder). Next: on the practitioner side, improving precision.

Normalisation / Scaling
Needed to prevent some attributes from stealing the game.

      Height   Gender   Class
  x1   150       F        1
  x2   180       M        0
  x3   185       M        0

Normalization: Height becomes (Height - 150) / (180 - 150).

Beware
Usual practice: normalize the whole dataset, learn on the training set, test on the test set. NO!
Good practice: normalize the training set (Scale_train), learn from the normalized training set, then scale the test set according to Scale_train and test.
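A sketch of this good practice, assuming scikit-learn; a Pipeline guarantees that the scaler is fitted on the training data only (including inside cross-validation) and then reused unchanged on the test set.

```python
# Fit the scaler on the training set only, then apply it to the test set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
model.fit(X_tr, y_tr)                    # Scale_train is learned here only
print("test accuracy:", model.score(X_te, y_te))
```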

Imbalanced datasets
Typically: normal transactions 99.99%, fraudulent transactions: not many.
Practice: define asymmetrical penalizations. The standard penalization $C \sum_i \xi_i$ becomes
$$C^+ \sum_{i,\, y_i = 1} \xi_i + C^- \sum_{i,\, y_i = -1} \xi_i$$
Other options?
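A sketch of this asymmetric penalization, assuming scikit-learn and NumPy: class_weight rescales C per class, so errors on the rare class cost more (the weights and data below are arbitrary).

```python
# Asymmetric error costs via per-class weights on C.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (990, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([-1] * 990 + [+1] * 10)            # heavily imbalanced labels

# C_+ = 100 C for the rare class, C_- = C for the majority class (arbitrary choice)
clf = SVC(kernel="rbf", C=1.0, class_weight={+1: 100, -1: 1}).fit(X, y)
# or let the library pick weights inversely proportional to class frequencies:
clf_auto = SVC(kernel="rbf", C=1.0, class_weight="balanced").fit(X, y)
print((clf.predict(X) == y).mean(), (clf_auto.predict(X) == y).mean())
```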

Overview (reminder). Next: on the practitioner side, reducing the computational cost.

Data sampling
Simple approaches: uniform sampling; stratified sampling (same class distribution as in $E$), often efficient.
Incremental approaches (Syed et al., 1999): partition $E$ into $E_1, \dots, E_N$; learn from $E_1$, keep its support vectors $SV_1$; learn from $E_2 \cup SV_1$, keep the support vectors $SV_2$; etc.
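A sketch of the incremental scheme of Syed et al., assuming scikit-learn and NumPy; chunk sizes, data and kernel are arbitrary choices.

```python
# Learn on one chunk, keep only its support vectors, add the next chunk, repeat.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
chunks = np.array_split(np.arange(len(y)), 5)        # E_1, ..., E_N

X_keep = np.empty((0, X.shape[1]))
y_keep = np.empty(0, dtype=int)
for idx in chunks:
    X_cur = np.vstack([X_keep, X[idx]])
    y_cur = np.concatenate([y_keep, y[idx]])
    clf = SVC(kernel="rbf", gamma="scale").fit(X_cur, y_cur)
    X_keep, y_keep = X_cur[clf.support_], y_cur[clf.support_]   # keep SV_k only

print("examples kept at the end:", len(y_keep), "out of", len(y))
```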

Data sampling, continued
Select examples (Bakir, 2005): use k-nearest neighbors; train the SVM on k-means prototypes (beware of the distances involved).
Hierarchical methods (Yu, 2003): use unsupervised learning to form clusters (cf. unsupervised learning, J. Gama); learn a hypothesis on each cluster; aggregate the hypotheses.

Reduce the number of variables
Select candidate support vectors $F \subset E$, so that $w = \sum \alpha_i y_i x_i$ with $(x_i, y_i) \in F$, and optimize the $\alpha_i$ on $E$:
$$\min \frac{1}{2} \sum_{i,j \in F} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle + C \sum_{l=1}^{n} \xi_l \quad \text{s.t.} \quad \forall l = 1 \dots n, \quad y_l(\langle w, x_l \rangle + b) \ge 1 - \xi_l, \quad \xi_l \ge 0$$

Sources
- Vapnik, The Nature of Statistical Learning Theory, Springer Verlag, 1995; Statistical Learning Theory, Wiley, 1998.
- Cristianini & Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, 2000.
- http://www.kernel-machines.org/tutorials
- Videolectures + ML Summer Schools
- Large Scale Machine Learning challenge, ICML 2008 workshop: http://largescale.ml.tu-berlin.de/workshop/

Overview (reminder). Next: theory.

Reminder (Vapnik, 1995, 1998)
Input: $E = \{(x_i, y_i)\},\ x_i \in \mathbb{R}^m,\ y_i \in \{-1, 1\},\ i = 1..n$, with $(x_i, y_i) \sim P(x, y)$.
Output: $\hat{h} : \mathbb{R}^m \to \{-1, 1\}$ or $\mathbb{R}$; $\hat{h}$ approximates $y$.
Criterion: ideally, minimize the generalization error
$$Err(h) = \int \ell(y, \hat{h}(x))\, dP(x, y)$$
where $\ell$ is a loss function, e.g. $\mathbf{1}_{y \ne \hat{h}(x)}$ or $(y - \hat{h}(x))^2$, and $P(x, y)$ is the joint distribution of the data.

The Bias-Variance Tradeoff
Choice of a model: the space $H$ where we look for $\hat{h}$.
- Bias: distance between $y$ and $h^* = \arg\min\{Err(h),\ h \in H\}$ (the best we can hope for).
- Variance: distance between $\hat{h}$ and $h^*$ (between the best $h$ and the $\hat{h}$ we actually learn).
Note: only the empirical risk (on the available data) is given,
$$Err_{emp,n}(\hat{h}) = \frac{1}{n} \sum_{i=1}^{n} \ell(y_i, \hat{h}(x_i))$$
Principle: $Err(\hat{h}) < Err_{emp,n}(\hat{h}) + B(n, H)$. If $H$ is reasonable, $Err_{emp,n} \to Err$ when $n \to \infty$.

Statistical Learning
Statistical Learning Theory: learning from a statistical perspective.
Goal of the theory: model a real or artificial phenomenon, in order to understand it, predict its behavior, and more generally exploit it.

General
A theory: hypotheses, then predictions. Hypotheses on the phenomenon; predictions about its behavior (here, learning errors).
Theory to algorithm: optimize the quantities allowing prediction. "Nothing is more practical than a good theory" (Vapnik).
Strengths/weaknesses: stronger hypotheses give more precise predictions, BUT if the hypotheses are wrong, nothing will work.

What theory do we need?
Approach in expectation: given a set of data, let $\bar{x}^+$ be the average of the positive examples and $\bar{x}^-$ the average of the negative examples, and let $h(x) = +1$ iff $d(x, \bar{x}^+) < d(x, \bar{x}^-)$. Example: breast cancer data.
Estimate the generalization error: split the data into a training set and a test set; learn $\bar{x}^+$ and $\bar{x}^-$ on the training set, measure the errors on the test set.

Classical Statistics vs Statistical Learning
Classical statistics: mean error. But we want guarantees.
PAC model (Probably Approximately Correct): what is the probability that the error is greater than a given threshold?

Example
Assume $Err(h) > \varepsilon$. What is the probability that $Err_{emp,n}(h) = 0$?
$$\Pr(Err_{emp,n}(h) = 0,\ Err(h) > \varepsilon) = (1 - Err(h))^n < (1 - \varepsilon)^n < \exp(-\varepsilon n)$$
Hence, in order to guarantee a risk $\delta$, i.e. $\Pr(Err_{emp,n}(h) = 0,\ Err(h) > \varepsilon) < \delta$, it suffices that $\exp(-\varepsilon n) \le \delta$: with probability at least $1 - \delta$, the true error of a hypothesis consistent with the sample is not greater than
$$\varepsilon = \frac{1}{n} \ln \frac{1}{\delta}$$
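A worked numeric instance of this bound (plain Python): requiring $\exp(-\varepsilon n) \le \delta$ amounts to $n \ge \frac{1}{\varepsilon} \ln \frac{1}{\delta}$.

```python
# How many examples suffice so that a consistent hypothesis has error <= eps
# with probability at least 1 - delta?
import math

eps, delta = 0.05, 0.01
n = math.ceil(math.log(1 / delta) / eps)
print(n)   # 93 examples suffice for eps = 5%, delta = 1%
```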

Statistical Learning Principle
Find a bound on the generalization error, and minimize the bound.
Note: $\hat{h}$ should be considered a random variable, depending on the training set $E$ and on the number of examples $n$: write it $\hat{h}_n$.
Results: deviation of the empirical error, and bias-variance,
$$Err(\hat{h}_n) \le Err_{emp,n}(\hat{h}_n) + B_1(n, H), \qquad Err(\hat{h}_n) \le Err(h^*) + B_2(n, H)$$

Approaches
- Minimization of the empirical risk: choose the hypothesis space $H$ (model selection), then choose $\hat{h}_n = \arg\min\{Err_n(h),\ h \in H\}$; beware of overfitting.
- Minimization of the structural risk: given $H_1 \subset H_2 \subset \dots \subset H_k$, find $\hat{h}_n = \arg\min\{Err_n(h) + pen(n, k),\ h \in H_k\}$. Which penalization?
- Regularization: find $\hat{h}_n = \arg\min\{Err_n(h) + \lambda \|h\|,\ h \in H\}$, where $\lambda$ is identified by cross-validation.

Structural Risk Minimization (figure)

Tool 1: the Hoeffding bound (Hoeffding, 1963)
Let $X_1, \dots, X_n$ be independent random variables, with $X_i$ taking values in $[a_i, b_i]$, and let $\bar{X} = (X_1 + \dots + X_n)/n$ be their empirical mean.
Theorem:
$$\Pr(|\bar{X} - E[\bar{X}]| \ge \varepsilon) \le 2 \exp\left( \frac{-2\varepsilon^2 n^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right)$$
where $E[\bar{X}]$ is the expectation of $\bar{X}$.

Hoeffding Bound (2)
Application: if $\Pr(|Err(g) - Err_n(g)| > \varepsilon) < 2e^{-2n\varepsilon^2}$, then with probability at least $1 - \delta$,
$$Err(g) \le Err_n(g) + \sqrt{\frac{\log(2/\delta)}{2n}}$$
Uniform deviations: but this does not say anything about $\hat{h}_n$ yet, since
$$Err(\hat{h}_n) - Err_n(\hat{h}_n) \le \sup_{h \in H} |Err(h) - Err_n(h)|$$
If $H$ is finite, sum the deviation probabilities over $h \in H$ (union bound); if $H$ is infinite, consider its trace on the data.
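A numeric illustration of the deviation term above (plain Python), for a fixed hypothesis and arbitrary values of $n$ and $\delta$.

```python
# With n examples, the empirical error of a fixed hypothesis deviates from its
# true error by at most sqrt(log(2/delta) / (2n)) with probability 1 - delta.
import math

n, delta = 1000, 0.05
print(math.sqrt(math.log(2 / delta) / (2 * n)))   # about 0.043
```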

Statistical Learning: Definitions (Vapnik 1992, 1995, 1998)
Trace of $H$ on $\{x_1, \dots, x_n\}$:
$$Tr_{x_1, \dots, x_n}(H) = \{(h(x_1), \dots, h(x_n)),\ h \in H\}$$
Growth function:
$$S(H, n) = \sup_{(x_1, \dots, x_n)} |Tr_{x_1, \dots, x_n}(H)|$$

Statistical Learning: Definitions (2)
Capacity of a hypothesis space $H$: if the training set is of size $n$ and some function of $H$ can have any behavior on the $n$ examples, nothing can be said!
$H$ shatters $(x_1, \dots, x_n)$ iff $\forall (y_1, \dots, y_n) \in \{-1, 1\}^n,\ \exists h \in H$ s.t. $\forall i = 1 \dots n,\ h(x_i) = y_i$.
Vapnik-Chervonenkis dimension:
$$VC(H) = \max\{n\ ;\ \exists (x_1, \dots, x_n) \text{ shattered by } H\} = \max\{n\ ;\ S(H, n) = 2^n\}$$

A shattered set: 3 points in $\mathbb{R}^2$, with $H$ the lines of the plane (figure).

Growth function of linear functions over $\mathbb{R}^{20}$ (figure: $S(H, n)/2^n$ vs. $n$)
The growth function is exponential w.r.t. $n$ for $n < d = VC(H)$, then polynomial (in $n^d$).

Theorem, separable case
$\forall \delta > 0$, with probability at least $1 - \delta$,
$$Err(h) \le Err_n(h) + 2\sqrt{\frac{\log(S(H, 2n)) + \log(2/\delta)}{n}}$$
Idea 1: the double-sample trick. Consider a second sample $E'$ of size $n$, with empirical error $Err'_n(h)$ on $E'$; then
$$\Pr\big(\sup_h (Err(h) - Err_n(h)) \ge \varepsilon\big) \le 2\, \Pr\big(\sup_h (Err'_n(h) - Err_n(h)) \ge \varepsilon/2\big)$$

Double sample trick
Suppose there exists $h$ such that (A) $Err_E(h) = 0$, (B) $Err(h) \ge \varepsilon$, (C) $Err_{E'}(h) \ge \frac{\varepsilon}{2}$. Then
$$P(A(h)\ \&\ C(h)) \ge P(A(h)\ \&\ B(h)\ \&\ C(h)) = P(A(h)\ \&\ B(h)) \cdot P(C(h) \mid A(h)\ \&\ B(h)) \ge \frac{1}{2}\, P(A(h)\ \&\ B(h))$$

Tool 2: the Sauer Lemma
Sauer Lemma: if $d = VC(H)$,
$$S(H, n) \le \sum_{i=0}^{d} \binom{n}{i}, \qquad \text{and for } n > d, \quad S(H, n) \le \left(\frac{en}{d}\right)^d$$
Idea 2: symmetrization; count the permutations that swap $E$ and $E'$.
Summary:
$$Err(h) \le Err_n(h) + O\left(\sqrt{\frac{d \log n}{n}}\right)$$