Machine Learning: Chenhao Tan University of Colorado Boulder LECTURE 9


Machine Learning: Chenhao Tan, University of Colorado Boulder, Lecture 9. Slides adapted from Jordan Boyd-Graber.

Recap. Supervised learning. Previously: KNN, naïve Bayes, logistic regression, and neural networks. This time: SVMs. Motivated by geometric properties; (another) example of a linear classifier; good theoretical guarantees.

Take a step back: problem, model, algorithm/optimization, evaluation.

Overview: Intuition, Linear Classifiers, Support Vector Machines, Formulation, Theoretical Guarantees, Recap.

Intuition

Thinking Geometrically. Suppose you have two classes, vacations and sports, and four documents. Sports: Doc 1: {ball, ball, ball, travel}; Doc 2: {ball, ball}. Vacations: Doc 3: {travel, ball, travel}; Doc 4: {travel}. What does this look like in vector space?

Put the documents in vector space. [Figure: the four documents plotted as points in a 2D space with axes "ball" and "travel".]

Vector space representation of documents. Each document is a vector, one component for each term. Terms are axes. High dimensionality: 10,000s of dimensions and more. How can we do classification in this space?
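A minimal sketch of this representation, assuming scikit-learn is available (the toy documents mirror the ball/travel example above; the library call is an illustration, not part of the original slides):

```python
from sklearn.feature_extraction.text import CountVectorizer

# The four toy documents from the example above, as raw text.
docs = [
    "ball ball ball travel",  # Doc 1 (sports)
    "ball ball",              # Doc 2 (sports)
    "travel ball travel",     # Doc 3 (vacations)
    "travel",                 # Doc 4 (vacations)
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())  # ['ball' 'travel'] -- the axes
print(X.toarray())                         # one row (vector) per document
```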

Vector space classification. As before, the training set is a set of documents, each labeled with its class. In vector space classification, this set corresponds to a labeled set of points (vectors) in the vector space. Premise 1: documents in the same class form a contiguous region. Premise 2: documents from different classes don't overlap. We define lines, surfaces, and hypersurfaces to divide regions.

Classes in the vector space. [Figure: documents from three classes, UK, China, and Kenya, as regions of points in the vector space, plus a new unlabeled document.] Should the new document be assigned to China, UK, or Kenya?

Find separators between the classes.

Based on these separators, the new document should be assigned to China.

How do we find separators that do a good job at classifying new documents like this one? That is the main topic of today.

Linear Classifiers

Linear classifiers. Definition: a linear classifier computes a linear combination (weighted sum) Σ_j β_j x_j of the feature values. Classification decision: Σ_j β_j x_j > β_0?, where the threshold β_0 (our bias) is a parameter. We call this the separator or decision boundary, and we find it based on the training set. Methods for finding the separator: logistic regression, naïve Bayes, linear SVM. Assumption: the classes are linearly separable. Before, we just talked about equations. What's the geometric intuition?
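As a sketch, the decision rule above is a few lines of NumPy; the weights β and threshold β_0 here are made-up illustrative values, not learned parameters:

```python
import numpy as np

beta = np.array([2.0, -1.0])  # one weight per feature (illustrative)
beta_0 = 0.5                  # threshold / bias (illustrative)

def classify(x):
    """Class c if the weighted sum of features exceeds the threshold."""
    return "c" if beta @ x > beta_0 else "c-bar"

print(classify(np.array([1.0, 0.0])))  # 2.0 > 0.5  -> c
print(classify(np.array([0.0, 1.0])))  # -1.0 > 0.5 is False -> c-bar
```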

A linear classifier in 1D. A linear classifier in 1D is a point x described by the equation β_1 x_1 = β_0, i.e., x = β_0 / β_1. Points x_1 with β_1 x_1 ≥ β_0 are in the class c. Points x_1 with β_1 x_1 < β_0 are in the complement class c̄.

A linear classifier in 2D. A linear classifier in 2D is a line described by the equation β_1 x_1 + β_2 x_2 = β_0. [Figure: example of a 2D linear classifier.] Points (x_1, x_2) with β_1 x_1 + β_2 x_2 ≥ β_0 are in the class c. Points (x_1, x_2) with β_1 x_1 + β_2 x_2 < β_0 are in the complement class c̄.

A linear classifier in 3D. A linear classifier in 3D is a plane described by the equation β_1 x_1 + β_2 x_2 + β_3 x_3 = β_0. [Figure: example of a 3D linear classifier.] Points (x_1, x_2, x_3) with β_1 x_1 + β_2 x_2 + β_3 x_3 ≥ β_0 are in the class c. Points (x_1, x_2, x_3) with β_1 x_1 + β_2 x_2 + β_3 x_3 < β_0 are in the complement class c̄.

Naïve Bayes and logistic regression as linear classifiers. Takeaway: naïve Bayes, logistic regression, and SVMs are all linear methods. They choose their hyperplanes based on different objectives: joint likelihood (NB), conditional likelihood (LR), and the margin (SVM).

Which hyperplane? For linearly separable training sets there are infinitely many separating hyperplanes. They all separate the training set perfectly, but they behave differently on test data: error rates on new data are low for some, high for others. How do we find a low-error separator?

Support Vector Machines

Support vector machines. Machine-learning research in the last two decades has improved classifier effectiveness. A new generation of state-of-the-art classifiers: support vector machines (SVMs), boosted decision trees, regularized logistic regression, neural networks, and random forests. Applications to IR problems, particularly text classification. SVMs are a kind of large-margin classifier: a vector-space-based machine-learning method aiming to find a decision boundary between two classes that is maximally far from any point in the training data (possibly discounting some points as outliers or noise).

Support Vector Machines. [Figure: 2-class training data with the maximum-margin decision hyperplane, its support vectors, and the maximized margin.] 2-class training data. Decision boundary: a linear separator. Criterion: being maximally far away from any data point; this determines the classifier margin. The position of the linear separator is defined by the support vectors.

Why maximize the margin? Points near the decision surface represent uncertain classification decisions (50% either way). A classifier with a large margin makes no low-certainty classification decisions: it is always confident. This gives a classification safety margin with respect to slight errors in measurement or document variation.

Why maximize the margin? SVM classifier: a large margin around the decision boundary. Compared to choosing an arbitrary decision hyperplane, we place a fat separator between the classes. This gives a unique solution, decreased memory capacity, and increased ability to correctly generalize to test data.

Formulation

Equation. Equation of a hyperplane:

    w · x + b = 0    (1)

Distance of a point to the hyperplane:

    |w · x + b| / ‖w‖    (2)

The margin ρ is given by

    ρ = min_{(x, y) ∈ S} |w · x + b| / ‖w‖ = 1 / ‖w‖    (3)

This is because for any point on the marginal hyperplane, w · x + b = ±1, and we would like to maximize the margin ρ.
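A small sketch of equations (1)-(3) in NumPy; w, b, and the sample points are illustrative values, chosen so that both points lie on the marginal hyperplanes:

```python
import numpy as np

w = np.array([3.0, 4.0])  # illustrative weight vector, ||w|| = 5
b = -5.0

def distance(x):
    """Distance of point x to the hyperplane w.x + b = 0 (equation 2)."""
    return abs(w @ x + b) / np.linalg.norm(w)

# Two illustrative points with w.x + b = +1 and -1 (marginal hyperplanes).
S = [np.array([2.0, 0.0]), np.array([0.0, 1.0])]
rho = min(distance(x) for x in S)   # margin (equation 3)
print(rho, 1 / np.linalg.norm(w))   # both 0.2 = 1/||w||
```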

Optimization problem. We want to find a weight vector w and bias b that optimize

    min_{w, b} (1/2) ‖w‖²    (4)

subject to y_i (w · x_i + b) ≥ 1 for all i ∈ [1, m]. Next week: the algorithm.
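In practice this quadratic program is handed to a solver. A hedged sketch with scikit-learn: SVC solves the soft-margin variant, so a very large C approximates the hard-margin problem (4) on separable data; the data points here are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 0.5], [2.0, 3.0], [3.0, 4.0]])  # toy data
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)                   # the learned hyperplane parameters
print(clf.support_vectors_)   # points attaining y_i (w.x_i + b) = 1
```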

Find the maximum margin hyperplane. [Figure: labeled training points in 2D.] Which are the support vectors?

Walkthrough example: building an SVM over the data shown. Working geometrically: if you got 0 = .5x + y − 2.75, close! Set up a system of equations:

    w_1 + w_2 + b = −1    (5)
    (3/2) w_1 + 2 w_2 + b = 0    (6)
    2 w_1 + 3 w_2 + b = +1    (7)

The SVM decision boundary is 0 = (2/5) x_1 + (4/5) x_2 − 11/5.
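A quick NumPy check of the walkthrough; the three points (1, 1), (3/2, 2), and (2, 3) are read off the coefficients of equations (5)-(7):

```python
import numpy as np

w = np.array([2 / 5, 4 / 5])  # from the walkthrough
b = -11 / 5

for x in [np.array([1.0, 1.0]), np.array([1.5, 2.0]), np.array([2.0, 3.0])]:
    print(x, round(w @ x + b, 10))  # -1.0, 0.0, +1.0, matching (5)-(7)
```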

Canonical form. In canonical form the separator is w_1 x_1 + w_2 x_2 + b = .4 x_1 + .8 x_2 − 2.2. Evaluating it at the three points: .4(1) + .8(1) − 2.2 = −1; .4(1.5) + .8(2) − 2.2 = 0; .4(2) + .8(3) − 2.2 = +1.

What's the margin? Distance to the closest point:

    √((2 − 1)² + (3 − 1)²) / 2 = √5 / 2    (8)

Or via the weight vector:

    ρ = 1 / ‖w‖ = 1 / √((2/5)² + (4/5)²) = 5 / √20 = √5 / 2    (9)

Theoretical Guarantees

Three proofs that suggest SVMs will work: leave-one-out error, margin analysis, and VC dimension (omitted for now).

Leave-one-out error (sketch). Leave-one-out error is the error from using one point as your test set, averaged over all such points:

    R̂_LOO = (1/m) Σ_{i=1}^{m} 1[h_{S − {x_i}}(x_i) ≠ y_i]    (10)

This serves as an unbiased estimate of the generalization error for samples of size m − 1.

Leave-one-out error is bounded by the number of support vectors:

    E_{S ∼ D^{m−1}}[R(h_S)] ≤ E_{S ∼ D^m}[N_SV(S) / m]    (11)

Consider the held-out error for x_i. If x_i was not a support vector, the answer doesn't change. If x_i was a support vector, it could change the answer; this is when we can have an error. There are N_SV support vectors and thus N_SV possible errors.
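A hedged sketch of estimating (10) directly, assuming scikit-learn; the toy data are illustrative, and since (11) holds in expectation over samples, comparing against N_SV/m for one sample is only suggestive:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 0.5], [0.5, 2.0],
              [2.0, 3.0], [3.0, 4.0], [3.0, 2.0]])  # toy separable data
y = np.array([-1, -1, -1, 1, 1, 1])

errors = 0
for train, test in LeaveOneOut().split(X):
    clf = SVC(kernel="linear", C=1e6).fit(X[train], y[train])
    errors += int(clf.predict(X[test])[0] != y[test][0])

full = SVC(kernel="linear", C=1e6).fit(X, y)
print(errors / len(X), len(full.support_vectors_) / len(X))  # R_LOO vs N_SV/m
```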

Margin theory.

Margin loss function. To see where SVMs really shine, consider the margin loss with margin ρ:

    Φ_ρ(x) = 0 if x ≥ ρ;  1 − x/ρ if 0 ≤ x ≤ ρ;  1 if x ≤ 0    (12)

The empirical margin loss of a hypothesis h is

    R̂_ρ(h) = (1/m) Σ_{i=1}^{m} Φ_ρ(y_i h(x_i)) ≤ (1/m) Σ_{i=1}^{m} 1[y_i h(x_i) ≤ ρ]    (13)

i.e., at most the fraction of points in the training sample S that are misclassified or classified with confidence less than ρ.
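Equation (12) and the empirical margin loss (13) translate directly into code; the margin values y_i h(x_i) below are illustrative:

```python
import numpy as np

def margin_loss(x, rho):
    """Phi_rho from equation (12)."""
    if x >= rho:
        return 0.0
    if x <= 0:
        return 1.0
    return 1.0 - x / rho

def empirical_margin_loss(margins, rho):
    """Equation (13): average margin loss over y_i * h(x_i) values."""
    return float(np.mean([margin_loss(m, rho) for m in margins]))

margins = [2.0, 0.5, -0.3, 1.2]                 # illustrative y_i * h(x_i)
print(empirical_margin_loss(margins, rho=1.0))  # (0 + 0.5 + 1 + 0)/4 = 0.375
```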

Generalization. For linear classifiers H = {x ↦ w · x : ‖w‖ ≤ Λ} and data X ⊆ {x : ‖x‖ ≤ r}, fix ρ > 0; then with probability at least 1 − δ, for any h ∈ H,

    R(h) ≤ R̂_ρ(h) + 2 √(r² Λ² / (ρ² m)) + √(log(1/δ) / (2m))    (14)

This bound is data-dependent: the data must be separable with a margin. Fortunately, many datasets do have good margin properties, and SVMs can find good classifiers in those instances.
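To get a feel for (14), one can plug in numbers; the values of r, Λ, ρ, m, and δ below are illustrative, and the constant factor follows the reconstruction of the bound above:

```python
import numpy as np

r, Lam, rho, m, delta = 1.0, 10.0, 0.5, 10_000, 0.05  # illustrative values

complexity = 2 * np.sqrt(r**2 * Lam**2 / (rho**2 * m))  # margin/complexity term
confidence = np.sqrt(np.log(1 / delta) / (2 * m))       # confidence term
print(complexity + confidence)  # ~0.41, the gap added to R_hat_rho(h)
```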

Recap

Choosing what kind of classifier to use. When building a text classifier, the first question is: how much training data is currently available? Before neural networks: None? Hand-write rules or collect more data (possibly with active learning). Very little? Naïve Bayes. A fair amount? SVM. A huge amount? Doesn't matter; use whatever works.

With neural networks: None? Hand-write rules or collect more data (possibly with active learning). Very little? Naïve Bayes, or consider using a pre-trained representation/model with neural networks. A fair amount? SVM, or consider using a pre-trained representation/model with neural networks. A huge amount? Probably neural networks. There is a trade-off between interpretability and model performance.

SVM extensions: what's next. Finding solutions. Slack variables: the separator need not be a perfect line. Kernels: different geometries.
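Both extensions are one keyword away in scikit-learn; a hedged sketch on illustrative, non-separable data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0]**2 + X[:, 1]**2 < 1.0, 1, -1)  # circular classes

soft = SVC(kernel="linear", C=0.5).fit(X, y)             # slack: imperfect line OK
kernelized = SVC(kernel="rbf", gamma="scale").fit(X, y)  # a different geometry
print(soft.score(X, y), kernelized.score(X, y))          # rbf fits far better
```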
