Machine Learning: Computational Learning Theory. Eric Xing, Fall 2016. Lecture 9, October 5, 2016.


Machine Learning 10-701, Fall 2016
Computational Learning Theory
Eric Xing. Lecture 9, October 5, 2016. Reading: Chapter 7 of T. M. Mitchell's book. Eric Xing @ CMU, 2006-2016.

Generalizability of Learning
In machine learning it is really the generalization error that we care about, yet most learning algorithms fit their models to the training set. Why should doing well on the training set tell us anything about generalization error? Specifically, can we relate error on the training set to generalization error? Are there conditions under which we can actually prove that learning algorithms will work well?

Complexity of Learning
The complexity of learning is measured mainly along two axes: information and computation. Information complexity is concerned with the generalization performance of learning: how many training examples are needed? How fast do the learner's estimates converge to the true population parameters? And so on. Computational complexity concerns the computational resources applied to the training data in order to extract the learner's predictions. It seems that when an algorithm improves with respect to one of these measures, it deteriorates with respect to the other.

What General Laws Constrain Inductive Learning?
Sample complexity: how many training examples are sufficient to learn the target concept?
Computational complexity: what resources are required to learn the target concept?
We want a theory that relates:
- the training examples: their quantity m, their quality, and how they are presented;
- the complexity of the hypothesis/concept space H;
- the accuracy of the approximation to the target concept;
- the probability of successful learning.

Prototypical Concept Learning Task
Binary classification. Everything we say here generalizes to other problems, including regression and multiclass classification.
Given:
- Instances X: possible days, each described by the attributes Sky, AirTemp, Humidity, Wind, Water, Forecast.
- Target function c: EnjoySport: X -> {0, 1}.
- Hypothesis space H: conjunctions of literals, e.g. (?, Cold, High, ?, ?, ?).
- Training examples S: i.i.d. positive and negative examples of the target function, (x_1, c(x_1)), ..., (x_m, c(x_m)).
Determine:
- A hypothesis h in H such that h(x) is "good" with respect to c(x) for all x in S?
- A hypothesis h in H such that h(x) is "good" with respect to c(x) for all x under the true distribution D?
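To make this hypothesis space concrete, here is a minimal sketch (the helper names and the specific instances are illustrative, not from the slides) of how a conjunction of literals classifies a day:

```python
# A conjunctive hypothesis over (Sky, AirTemp, Humidity, Wind, Water, Forecast):
# each position is either a required value (e.g. "Cold") or "?" meaning "don't care".

def predict(hypothesis, instance):
    """Return 1 if the instance satisfies every literal of the conjunction, else 0."""
    return int(all(lit == "?" or lit == val for lit, val in zip(hypothesis, instance)))

h = ["?", "Cold", "High", "?", "?", "?"]   # the example hypothesis from the slide
x1 = ["Sunny", "Cold", "High", "Strong", "Warm", "Same"]
x2 = ["Sunny", "Warm", "High", "Strong", "Warm", "Same"]
print(predict(h, x1))  # 1: AirTemp=Cold and Humidity=High are both satisfied
print(predict(h, x2))  # 0: AirTemp=Warm violates the Cold literal
```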

Two Basic Competing Models
PAC framework: the sample labels are consistent with some h in H; the learner's hypothesis is required to meet an absolute upper bound on its error.
Agnostic framework: no prior restriction on the sample labels; the required upper bound on the hypothesis error is only relative (to the best hypothesis in the class).

Sample Complexity
How many training examples are sufficient to learn the target concept? Training scenarios:
1. The learner proposes instances as queries to the teacher: the learner proposes an instance x, and the teacher provides c(x).
2. The teacher (who knows c) provides training examples: the teacher provides a sequence of examples of the form (x, c(x)).
3. Some random process (e.g., nature) proposes instances: an instance x is generated randomly, and the teacher provides c(x).

Protocol
Training data (reconstructed from the table on the slide):

Temp  Press.  Sore-Throat  Colour  | diseaseX
35    95      Y            Pale    | No
22    110     N            Clear   | Yes
...   ...     ...          ...     | ...
10    87      N            Pale    | No

Given: a set of examples X, a fixed (unknown) distribution D over X, a set of hypotheses H, and a set of possible target concepts C.
The learner observes a sample S = { (x_i, c(x_i)) }: instances x_i drawn from the distribution D and labeled by a target concept c in C. (The learner does NOT know c(.) or D.)
The learner outputs h in H estimating c; h is evaluated by its performance on subsequent instances drawn from D. [Figure: the learned classifier is then applied to a new patient instance and predicts diseaseX = No.]
For now: C = H (so c is in H), and the data are noise-free.

True Error of a Hypothesis
Definition: the true error (denoted $\epsilon_D(h)$) of hypothesis h with respect to target concept c and distribution D is the probability that h will misclassify an instance drawn at random according to D:
$\epsilon_D(h) \equiv \Pr_{x \sim D}[c(x) \neq h(x)]$.

Two Notions of Error
Training error (a.k.a. empirical risk or empirical error) of hypothesis h with respect to target concept c: how often $h(x) \neq c(x)$ over the training instances in S,
$\hat{\epsilon}_S(h) = \frac{1}{m}\sum_{i=1}^{m} \mathbb{1}\{h(x_i) \neq c(x_i)\}$.
True error (a.k.a. generalization error, test error) of hypothesis h with respect to c: how often $h(x) \neq c(x)$ over future instances drawn i.i.d. from D,
$\epsilon_D(h) = \Pr_{x \sim D}[h(x) \neq c(x)]$.
Can we bound $\epsilon_D(h)$ in terms of $\hat{\epsilon}_S(h)$?
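The distinction can be made concrete with a toy experiment (a minimal sketch with a made-up distribution, target, and hypothesis, not from the slides): the training error is an average over S, while the true error must be estimated on fresh draws from D.

```python
import random

random.seed(0)

def target(x):                      # unknown target concept c: label 1 iff x > 0.5
    return int(x > 0.5)

def h(x):                           # a (slightly wrong) hypothesis the learner produced
    return int(x > 0.6)

def error(hypothesis, examples):
    """Fraction of examples (x, y) that the hypothesis misclassifies."""
    return sum(hypothesis(x) != y for x, y in examples) / len(examples)

# Training sample S: m i.i.d. draws from D = Uniform(0, 1), labeled by c.
m = 20
S = [(x, target(x)) for x in (random.random() for _ in range(m))]

# Training error is measured on S; true error is estimated on a large fresh sample from D.
fresh = [(x, target(x)) for x in (random.random() for _ in range(100_000))]
print("training error:", error(h, S))            # small, possibly 0 on a lucky sample
print("estimated true error:", error(h, fresh))  # close to P(0.5 < x <= 0.6) = 0.1
```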

The Union Bound
Lemma (the union bound). Let $A_1, A_2, \dots, A_k$ be k events (which need not be independent). Then
$P(A_1 \cup A_2 \cup \cdots \cup A_k) \le P(A_1) + P(A_2) + \cdots + P(A_k)$.
In probability theory the union bound is usually stated as an axiom (and so we will not try to prove it), but it also makes intuitive sense: the probability that any one of k events happens is at most the sum of the probabilities of the k individual events.

Hoeffding Inequality
Lemma (Hoeffding inequality). Let $Z_1, \dots, Z_m$ be m independent and identically distributed (i.i.d.) random variables drawn from a Bernoulli($\phi$) distribution, i.e., $P(Z_i = 1) = \phi$ and $P(Z_i = 0) = 1 - \phi$. Let $\hat{\phi} = \frac{1}{m}\sum_{i=1}^{m} Z_i$ be the mean of these random variables, and let any $\gamma > 0$ be fixed. Then
$P(|\phi - \hat{\phi}| > \gamma) \le 2\exp(-2\gamma^2 m)$.
This lemma (which in learning theory is also called the Chernoff bound) says that if we take the average of m Bernoulli($\phi$) random variables as our estimate of $\phi$, then the probability of being far from the true value is small, as long as m is large.
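A small simulation (illustrative only; the constants are arbitrary) comparing the empirical deviation probability with the Hoeffding bound $2e^{-2\gamma^2 m}$:

```python
import math
import random

random.seed(1)

phi, m, gamma = 0.3, 200, 0.05     # true Bernoulli mean, sample size, deviation
trials = 20_000

# Estimate P(|phi_hat - phi| > gamma) by repeated sampling.
exceed = 0
for _ in range(trials):
    phi_hat = sum(random.random() < phi for _ in range(m)) / m
    exceed += abs(phi_hat - phi) > gamma

empirical = exceed / trials
hoeffding = 2 * math.exp(-2 * gamma**2 * m)   # the bound from the lemma
print(f"empirical deviation probability: {empirical:.3f}")   # roughly 0.12 here
print(f"Hoeffding upper bound:           {hoeffding:.3f}")   # 2*e^{-1} ~ 0.736
```

The bound holds but is loose for this setting; its value is that it holds for every m and every Bernoulli parameter.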

Version Space
A hypothesis h is consistent with a set of training examples S of target concept c if and only if $h(x_i) = c(x_i)$ for each training example $(x_i, c(x_i))$ in S.
The version space $VS_{H,S}$, with respect to hypothesis space H and training examples S, is the subset of hypotheses from H that are consistent with all training examples in S.

Consistent Learner
A learner is consistent if it outputs a hypothesis that perfectly fits the training data. This is quite a reasonable learning strategy. Every consistent learner outputs a hypothesis belonging to the version space. We want to know how well such a hypothesis generalizes.
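To make the version space concrete, here is a brute-force sketch (toy attribute domains chosen for illustration; a practical learner such as candidate elimination would not enumerate H) that keeps every conjunctive hypothesis consistent with S:

```python
from itertools import product

# Toy instance space: two attributes with small domains (illustrative only).
DOMAINS = [["Sunny", "Rainy"], ["Warm", "Cold"]]

def predict(h, x):
    """Conjunction of literals: each position is a required value or '?' (don't care)."""
    return int(all(lit == "?" or lit == val for lit, val in zip(h, x)))

# H = all conjunctions of literals over the two attributes (3 * 3 = 9 hypotheses).
H = list(product(*[dom + ["?"] for dom in DOMAINS]))

# Training sample S, labeled consistently with the target "first attribute must be Sunny".
S = [(("Sunny", "Warm"), 1),
     (("Sunny", "Cold"), 1),
     (("Rainy", "Warm"), 0)]

# The version space: every h in H consistent with all of S.
version_space = [h for h in H if all(predict(h, x) == y for x, y in S)]
print(version_space)   # here only ('Sunny', '?') survives
```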

Probably Approximately Correct
Goal: a PAC learner produces a hypothesis $\hat{h}$ that is approximately correct, $\mathrm{err}_D(\hat{h}) \approx 0$, with high probability: $P(\mathrm{err}_D(\hat{h}) \approx 0) \approx 1$.
Double hedging: "approximately" and "probably". We need both!

Exhausting the Version Space
Definition: the version space $VS_{H,S}$ is said to be $\epsilon$-exhausted with respect to c and D if every hypothesis h in $VS_{H,S}$ has true error less than $\epsilon$ with respect to c and D.

How Many Examples Will $\epsilon$-Exhaust the VS?
Theorem [Haussler, 1988]: if the hypothesis space H is finite, and S is a sequence of $m \ge 1$ independent random examples of some target concept c, then for any $0 \le \epsilon \le 1/2$, the probability that the version space with respect to H and S is not $\epsilon$-exhausted (with respect to c) is less than
$|H|\, e^{-\epsilon m}$.
This bounds the probability that any consistent learner will output a hypothesis h with $\epsilon_D(h) \ge \epsilon$.

Proof
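A standard proof sketch (following Mitchell, Chapter 7):

```latex
% Fix any "bad" hypothesis h in H, i.e. one with true error eps_D(h) >= eps.
% The probability that h is consistent with one random example is at most 1 - eps,
% so the probability that h is consistent with all m i.i.d. examples is at most (1 - eps)^m.
\[
\Pr\big[\text{some bad } h \text{ survives in } VS_{H,S}\big]
  \;\le\; \sum_{h:\, \epsilon_D(h) \ge \epsilon} \Pr\big[h \text{ consistent with } S\big]
  \;\le\; |H|\,(1-\epsilon)^m
  \;\le\; |H|\, e^{-\epsilon m},
\]
% where the first inequality is the union bound over at most |H| bad hypotheses
% and the last step uses 1 - x <= e^{-x}.
```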

What It Means
[Haussler, 1988]: the probability that the version space is not $\epsilon$-exhausted after m training examples is at most $|H|\, e^{-\epsilon m}$. Suppose we want this probability to be at most $\delta$.
1. How many training examples suffice? $m \ge \frac{1}{\epsilon}\big(\ln|H| + \ln(1/\delta)\big)$.
2. If $\hat{\epsilon}_S(h) = 0$ (zero training error), then with probability at least $1-\delta$: $\epsilon_D(h) \le \frac{1}{m}\big(\ln|H| + \ln(1/\delta)\big)$.

Learning Conjunctions of Boolean Literals
How many examples are sufficient to assure with probability at least $1-\delta$ that every h in $VS_{H,S}$ satisfies $\epsilon_D(h) \le \epsilon$?
Use our theorem: $m \ge \frac{1}{\epsilon}\big(\ln|H| + \ln(1/\delta)\big)$.
Suppose H contains conjunctions of constraints on up to n boolean attributes (i.e., n boolean literals). Then $|H| = 3^n$, and
$m \ge \frac{1}{\epsilon}\big(\ln 3^n + \ln(1/\delta)\big)$, or equivalently $m \ge \frac{1}{\epsilon}\big(n\ln 3 + \ln(1/\delta)\big)$.

PAC Learnability
A concept class is PAC learnable by a learning algorithm if the algorithm requires no more than polynomial computation per training example and no more than a polynomial number of samples.
Theorem: the class of conjunctions of Boolean literals is PAC learnable.

How About EnjoySport?
$m \ge \frac{1}{\epsilon}\big(\ln|H| + \ln(1/\delta)\big)$.
If H is as given for EnjoySport, then $|H| = 729$, and $m \ge \frac{1}{\epsilon}\big(\ln 729 + \ln(1/\delta)\big)$.
If we want to assure that, with probability 95%, the version space contains only hypotheses with $\epsilon_D(h) \le 0.1$, then it is sufficient to have m examples, where
$m \ge \frac{1}{0.1}\big(\ln 729 + \ln(1/0.05)\big) = 10(\ln 729 + \ln 20) = 10(6.59 + 3.00) = 95.9$.
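A quick check of this arithmetic (a throwaway helper, not part of the slides):

```python
import math

def sample_bound(H_size, eps, delta):
    """Sufficient m so that, w.p. >= 1 - delta, every consistent h has true error <= eps."""
    return (math.log(H_size) + math.log(1 / delta)) / eps

m = sample_bound(H_size=729, eps=0.1, delta=0.05)
print(f"m >= {m:.1f}")   # ~95.9, so 96 examples suffice for EnjoySport
```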

PAC-Learning
The learner L can draw a labeled instance (x, c(x)) in unit time, where x in X has length n and is drawn from distribution D, labeled by target concept c in C.
Definition: learner L PAC-learns class C using hypothesis space H if
1. for any target concept c in C, any distribution D, any $\epsilon$ with $0 < \epsilon < 1/2$, and any $\delta$ with $0 < \delta < 1/2$, L returns h in H such that, with probability $\ge 1-\delta$, $\mathrm{err}_D(h) < \epsilon$; and
2. L's run-time (and hence its sample complexity) is poly(|x|, size(c), $1/\epsilon$, $1/\delta$).
Sufficient conditions: (1) only poly(.) training instances are needed, e.g. if $|H| = 2^{\mathrm{poly}(n)}$ then $m \ge \frac{1}{\epsilon}\big(\ln|H| + \ln(1/\delta)\big)$ is polynomial; (2) only poly time per instance. Often C = H.

Agnostic Learning
So far we assumed c is in H. In the agnostic learning setting we do not assume c is in H. What do we want then? The hypothesis h that makes the fewest errors on the training data. What is the sample complexity in this case?
$m \ge \frac{1}{2\epsilon^2}\big(\ln|H| + \ln(1/\delta)\big)$,
derived from the Hoeffding bound:
$\Pr\big[\epsilon_D(h) > \hat{\epsilon}_S(h) + \epsilon\big] \le e^{-2m\epsilon^2}$.
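To see the price of dropping the assumption that c lies in H, the two bounds can be compared directly (an illustrative calculation; the hypothesis-space size is arbitrary):

```python
import math

def m_realizable(H_size, eps, delta):
    """Consistent-learner bound: m >= (1/eps)(ln|H| + ln(1/delta))."""
    return (math.log(H_size) + math.log(1 / delta)) / eps

def m_agnostic(H_size, eps, delta):
    """Agnostic (Hoeffding-based) bound: m >= (1/(2 eps^2))(ln|H| + ln(1/delta))."""
    return (math.log(H_size) + math.log(1 / delta)) / (2 * eps**2)

H_size, eps, delta = 3**10, 0.1, 0.05   # e.g. conjunctions over n = 10 boolean literals
print(f"realizable: m >= {m_realizable(H_size, eps, delta):.0f}")  # ~140
print(f"agnostic:   m >= {m_agnostic(H_size, eps, delta):.0f}")    # ~699: larger by 1/(2*eps) = 5x
```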

Empirical Risk Minimization Paradigm
Choose a hypothesis class H of subsets of X. For an input sample S, find some h in H that fits S "well". For a new point x, predict a label according to its membership in h.
Example: consider linear classification, and let $h_\theta(x) = \mathbb{1}\{\theta^T x \ge 0\}$. Then empirical risk minimization picks $\hat{\theta} = \arg\min_\theta \hat{\epsilon}_S(h_\theta)$ and outputs $\hat{h} = h_{\hat{\theta}}$.
We think of ERM as the most "basic" learning algorithm, and it is this algorithm that we focus on in the remainder of the lecture. In our study of learning theory, it will be useful to abstract away from the specific parameterization of hypotheses and from issues such as whether we are using a linear classifier or an ANN.
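A minimal ERM sketch, assuming a 1-D threshold hypothesis class and a synthetic noisy sample (purely illustrative; practical ERM for linear classifiers typically uses surrogate losses rather than exhaustive search):

```python
import random

random.seed(2)

def empirical_risk(theta, S):
    """Training error of the threshold classifier h_theta(x) = 1{x >= theta} on S."""
    return sum((x >= theta) != y for x, y in S) / len(S)

# Noisy training sample: true decision boundary near 0.4, with 10% label noise.
S = [(x, (x >= 0.4) ^ (random.random() < 0.1)) for x in
     (random.random() for _ in range(200))]

# ERM over a finite hypothesis class H = { h_theta : theta on a grid of thresholds }.
H = [i / 100 for i in range(101)]
theta_hat = min(H, key=lambda th: empirical_risk(th, S))
print("ERM threshold:", theta_hat, "training error:", empirical_risk(theta_hat, S))
```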

The Case of Finite H
Let H = {h_1, ..., h_k} consist of k hypotheses, and write $\epsilon(h)$ for the true (generalization) error and $\hat{\epsilon}(h)$ for the training error. We would like to give guarantees on the generalization error of the ERM output $\hat{h}$.
First, we will show that $\hat{\epsilon}(h)$ is a reliable estimate of $\epsilon(h)$ for all h.
Second, we will show that this implies an upper bound on the generalization error of $\hat{h}$.

Misclassification Probability
The outcome of a binary classifier can be viewed as a Bernoulli random variable Z: draw $(x, y) \sim D$ and set $Z = \mathbb{1}\{h_i(x) \ne y\}$, so that $\epsilon(h_i) = \mathbb{E}[Z]$. For each training sample, $Z_j = \mathbb{1}\{h_i(x^{(j)}) \ne y^{(j)}\}$, so the training error is $\hat{\epsilon}(h_i) = \frac{1}{m}\sum_{j=1}^{m} Z_j$. The Hoeffding inequality then gives
$\Pr\big(|\epsilon(h_i) - \hat{\epsilon}(h_i)| > \gamma\big) \le 2\exp(-2\gamma^2 m)$.
This shows that, for our particular $h_i$, the training error will be close to the generalization error with high probability, assuming m is large.

Uniform Convergence
But we don't just want to guarantee that $\hat{\epsilon}(h_i)$ will be close to $\epsilon(h_i)$ (with high probability) for one particular $h_i$; we want to prove that this holds simultaneously for all $h_i \in H$. For k hypotheses, the union bound gives
$\Pr\big(\exists\, h_i \in H:\ |\epsilon(h_i) - \hat{\epsilon}(h_i)| > \gamma\big) \le \sum_{i=1}^{k} 2\exp(-2\gamma^2 m) = 2k\exp(-2\gamma^2 m)$.
This means that, with probability at least $1 - 2k\exp(-2\gamma^2 m)$, we have $|\epsilon(h_i) - \hat{\epsilon}(h_i)| \le \gamma$ for all $h_i \in H$ (uniform convergence).

In the discussion above, what we did was, for particular values of m and $\gamma$, give a bound on the probability that $|\epsilon(h_i) - \hat{\epsilon}(h_i)| > \gamma$ for some $h_i \in H$. There are three quantities of interest here: m, $\gamma$, and the probability of error; we can bound any one of them in terms of the other two.

Sample Complexity
How many training examples do we need in order to make such a guarantee? We find that if
$m \ge \frac{1}{2\gamma^2}\log\frac{2k}{\delta}$,
then with probability at least $1-\delta$ we have $|\epsilon(h_i) - \hat{\epsilon}(h_i)| \le \gamma$ for all $h_i \in H$.
The key property of this bound is that the number of training examples needed to make the guarantee is only logarithmic in k, the number of hypotheses in H. This will be important later.

Generalization Error Bound
Similarly, we can hold m and $\delta$ fixed and solve for $\gamma$ in the previous equation, and show [again, convince yourself that this is right!] that with probability $1-\delta$ we have, for all $h_i \in H$,
$|\hat{\epsilon}(h_i) - \epsilon(h_i)| \le \sqrt{\frac{1}{2m}\log\frac{2k}{\delta}}$.
Define $h^{\ast} = \arg\min_{h \in H} \epsilon(h)$ to be the best possible hypothesis in H. If uniform convergence occurs, then the generalization error of $\hat{h}$ is at most $2\gamma$ worse than that of the best possible hypothesis in H: $\epsilon(\hat{h}) \le \epsilon(h^{\ast}) + 2\gamma$.
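The chain of inequalities behind this last claim, assuming the uniform convergence event holds:

```latex
% Assume |eps(h) - eps_hat(h)| <= gamma for all h in H (uniform convergence).
% Then for the ERM output h_hat and the best-in-class h*:
\[
\epsilon(\hat{h})
  \;\le\; \hat{\epsilon}(\hat{h}) + \gamma       % uniform convergence applied to h_hat
  \;\le\; \hat{\epsilon}(h^{\ast}) + \gamma      % h_hat minimizes the training error
  \;\le\; \epsilon(h^{\ast}) + 2\gamma.          % uniform convergence applied to h*
\]
```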

Summary
For a finite hypothesis space H: a consistent learner needs $m \ge \frac{1}{\epsilon}\big(\ln|H| + \ln(1/\delta)\big)$ examples to be probably ($1-\delta$) approximately ($\epsilon$) correct; in the agnostic setting, ERM needs $m \ge \frac{1}{2\epsilon^2}\big(\ln|H| + \ln(1/\delta)\big)$ examples, and its output is within $2\gamma$ of the best hypothesis in H. Both bounds grow only logarithmically in $|H|$.