CS 188: Artificial Intelligence Spring Announcements


CS 188: Artificial Intelligence Spring 2010
Lecture 24: Perceptrons and More! 4/22/2010
Pieter Abbeel, UC Berkeley (slides adapted from Dan Klein)

Announcements
W7 due tonight [this is your last written assignment for the semester!]
Project 5 out tonight: Classification!

Announcements (2)
Contest logistics: up and running! Tournaments every night.
Final tournament: we will use submissions received by Thursday May 6, 11pm.
Contest extra credit through bonus points on the final exam [all based on final ranking]:
0.5pt for beating Staff
0.5pt for beating Fa09-TeamA (top 5), Fa09-TeamB (top 10), and Fa09-TeamC (top 20) from last semester [total of 1.5pts to be earned]
1pt for being 3rd, 2pts for being 2nd, 3pts for being 1st

Where are we and what's left?
So far: Search, CSPs, Adversarial search, MDPs and RL, Bayes nets and probabilistic inference, Machine learning.
Today: Machine Learning part III: kNN and kernels.
Tuesday: Applications in Robotics.
Thursday: Applications in Vision and Language + Conclusion + Where to learn more.

Classification
Example spam email: "Hello, Do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just ..."
Feature vector: free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ... -> SPAM or +
Digit example: PIXEL-7,12 : 1, PIXEL-7,13 : 0, ..., NUM_LOOPS : 1, ...

Classification overview
Naïve Bayes: builds a model from training data; gives prediction probabilities; strong assumptions about feature independence; one pass through the data (counting).
Perceptron: makes fewer assumptions about the data; mistake-driven learning; multiple passes through the data (prediction); often more accurate.
SVM: properties similar to the perceptron; convex optimization formulation.
Nearest-Neighbor: non-parametric: more expressive with more training data.
Kernels: an efficient way to make linear learning architectures into nonlinear ones.

Case-Based Reasoning
Similarity for classification. Case-based reasoning: predict an instance's label using similar instances.
Nearest-neighbor classification:
1-NN: copy the label of the most similar data point.
K-NN: let the k nearest neighbors vote (have to devise a weighting scheme).
Key issue: how to define similarity.
Trade-off: small k gives relevant neighbors; large k gives smoother functions. Sound familiar?
[Demo] http://www.cs.cmu.edu/~zhuxj/courseproject/knndemo/knn.html

Parametric / Non-parametric
Parametric models: fixed set of parameters; more data means better settings.
Non-parametric models: complexity of the classifier increases with data; better in the limit, often worse in the non-limit.
(K)NN is non-parametric.
[Figure: true regression function vs. nearest-neighbor fits with 2, 10, 100, and 10000 training examples]
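To make the nearest-neighbor idea above concrete, here is a minimal k-NN classifier sketch in Python (not from the slides; the Euclidean distance and unweighted majority vote are my own assumptions, and a real weighting scheme would down-weight distant neighbors):

    from collections import Counter
    import numpy as np

    def knn_predict(train_X, train_y, query, k=3):
        """Predict a label for `query` by majority vote among its k nearest training points."""
        dists = np.linalg.norm(train_X - query, axis=1)  # Euclidean distance to every training point
        nearest = np.argsort(dists)[:k]                  # indices of the k closest points
        votes = Counter(train_y[i] for i in nearest)
        return votes.most_common(1)[0][0]

With k=1 this is exactly the 1-NN rule above: copy the label of the single most similar point.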

Nearest-Neighbor Classification
Nearest neighbor for digits: take a new image, compare it to all training images, assign the label of the closest example.
Encoding: an image is a vector of pixel intensities.
What's the similarity function? Dot product of two image vectors?
Usually normalize the vectors so that ||x|| = 1: min = 0 (when?), max = 1 (when?).

Basic Similarity
Many similarities are based on feature dot products: sim(x, x') = f(x) · f(x') = Σ_i f_i(x) f_i(x'). If the features are just the pixels, this is the image dot product above.
Note: not all similarities are of this form.
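A hedged sketch of the normalized dot-product similarity just described, applied to 1-NN digit classification (the flattened-image representation and the function names are illustrative assumptions):

    import numpy as np

    def dot_similarity(a, b):
        """Dot product of unit-normalized intensity vectors: 0 when the images share no ink, 1 when identical up to scale."""
        return float(np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)))

    def nn_classify(train_images, train_labels, test_image):
        """1-NN: copy the label of the most similar training image."""
        sims = [dot_similarity(img, test_image) for img in train_images]
        return train_labels[int(np.argmax(sims))]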

Invariant Metrics
Better distances use knowledge about vision.
Invariant metrics: similarities are invariant under certain transformations: rotation, scaling, translation, stroke thickness.
E.g.: 16 x 16 = 256 pixels, so each image is a point in 256-dimensional space. Small similarity in R^256 (why?).
There is a variety of invariant metrics in the literature.
Viable alternative: transform the training examples so that the training set includes all variations (a small augmentation sketch follows below).

Classification overview: Naïve Bayes, Perceptron, SVM, Nearest-Neighbor, Kernels
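Relating to the "transform the training examples" alternative on the Invariant Metrics slide above, a minimal data-augmentation sketch (the one-pixel wrap-around shifts are my own illustrative choice, not the lecture's):

    import numpy as np

    def augment_with_shifts(image):
        """Return a 2-D digit image plus copies shifted by one pixel in each of the four directions."""
        variants = [image]
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            variants.append(np.roll(np.roll(image, dy, axis=0), dx, axis=1))  # wrap-around shift
        return variants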

A Tale of Two Approaches
Nearest neighbor-like approaches: can use fancy similarity functions; don't actually get to do explicit learning.
Perceptron-like approaches: explicit training to reduce empirical error; can't use fancy similarity, only linear. Or can they? Let's find out!

Perceptron Weights
What is the final value of a weight vector w_y of a perceptron? Can it be any real vector?
No! It's built by adding up feature vectors of the inputs.
So we can reconstruct the weight vectors (the primal representation) from the update counts (the dual representation).
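In symbols (my reconstruction of the slide's point, writing alpha_{i,y} for the update counts):

    w_y = \sum_i \alpha_{i,y} \, f(x_i)
    \qquad\Longrightarrow\qquad
    \mathrm{score}(y, x) = w_y \cdot f(x) = \sum_i \alpha_{i,y} \, \big( f(x_i) \cdot f(x) \big) = \sum_i \alpha_{i,y} \, K(x_i, x)

so storing the counts alpha (the dual representation) is as good as storing the weights, and only dot products between examples ever appear.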

Dual Perceptron
How to classify a new example x? If someone tells us the value of K for each pair of examples, we never need to build the weight vectors!

Dual Perceptron (training)
Start with zero counts (alpha).
Pick up training instances one by one and try to classify x_n.
If correct, no change!
If wrong: lower the count of the wrong class (for this instance), raise the count of the right class (for this instance).
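A minimal sketch of the dual perceptron just described (the names are my own assumptions; sim can be the plain dot product, or any kernel, which is exactly the point of the next slide):

    import numpy as np

    def train_dual_perceptron(X, y, classes, sim, epochs=5):
        """Dual perceptron: keep one count alpha[c][i] per (class, training example) instead of weight vectors."""
        n = len(X)
        alpha = {c: np.zeros(n) for c in classes}
        for _ in range(epochs):
            for i in range(n):
                sims = np.array([sim(X[j], X[i]) for j in range(n)])   # similarity of x_i to every training example
                scores = {c: float(alpha[c] @ sims) for c in classes}  # score(c, x_i) = sum_j alpha[c][j] * K(x_j, x_i)
                guess = max(scores, key=scores.get)
                if guess != y[i]:                                      # mistake-driven update
                    alpha[guess][i] -= 1.0                             # lower the count of the wrong class for x_i
                    alpha[y[i]][i] += 1.0                              # raise the count of the right class for x_i
        return alpha

Classifying a new x works the same way: compute sim(x_j, x) against every training example and take the class with the highest score.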

Kernelized Perceptron
If we had a black box (kernel) which told us the dot product of two examples x and y:
We could work entirely in the dual representation.
No need to ever take explicit dot products ("kernel trick").
Like nearest neighbor: work with black-box similarities.
Downside: slow if many examples get nonzero alpha.

Kernels: Who Cares?
So far: a very strange way of doing a very simple calculation.
"Kernel trick": we can substitute any* similarity function in place of the dot product.
Lets us learn new kinds of hypotheses.
* Fine print: if your kernel doesn't satisfy certain technical requirements, lots of proofs break (e.g. convergence, mistake bounds). In practice, illegal kernels sometimes work (but not always).
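With the dual perceptron sketch above, the kernel trick is literally a one-argument change: swap in a different similarity function. For example (the quadratic kernel here is an illustration, and X, y, classes are assumed to be defined as before):

    import numpy as np

    def quadratic_kernel(a, b):
        """Implicit dot product in the quadratic feature space: K(a, b) = (a . b + 1)^2."""
        return (float(np.dot(a, b)) + 1.0) ** 2

    alpha = train_dual_perceptron(X, y, classes, sim=quadratic_kernel)  # reuses the earlier sketch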

Non-Linear Separators
Data that is linearly separable (with some noise) works out great.
But what are we going to do if the dataset is just too hard?
How about mapping the data to a higher-dimensional space?
[Figure: 1-D points on the x axis that no threshold separates become separable after adding the feature x^2]
(This and the next few slides adapted from Ray Mooney, UT. A small worked example follows below.)

Non-Linear Separators
General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x -> φ(x)
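A tiny worked example of the idea on this slide, mapping 1-D points into a 2-D space where they become linearly separable (the data and the threshold are illustrative assumptions):

    import numpy as np

    # 1-D data: positives near the origin, negatives far away -- no single threshold on x separates them.
    X = np.array([-3.0, -2.5, -0.5, 0.0, 0.5, 2.5, 3.0])
    y = np.array([-1, -1, +1, +1, +1, -1, -1])

    # Map x -> (x, x^2): in the new space the linear rule "second coordinate < 2" separates the classes.
    phi = np.stack([X, X ** 2], axis=1)
    predictions = np.where(phi[:, 1] < 2.0, +1, -1)
    print(np.array_equal(predictions, y))  # True: separable after the mapping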

Some Kernels
Kernels implicitly map original feature vectors to higher-dimensional spaces, take the dot product there, and hand the result back.
Linear kernel: K(x, x') = x · x', i.e. φ(x) = x.
Quadratic kernel: K(x, x') = (x · x' + 1)^2. For x ∈ R^3 the implicit feature map is
φ(x) = [x1x1, x1x2, x1x3, x2x1, x2x2, x2x3, x3x1, x3x2, x3x3, √2·x1, √2·x2, √2·x3, 1]

Some Kernels (2)
Polynomial kernel: K(x, x') = (x · x' + 1)^d. For x ∈ R^3 the implicit feature map contains entries such as
φ(x) = [x1^d, x2^d, x3^d, √d·x1^(d-1)·x2, √d·x1^(d-1)·x3, ..., √d·x1, √d·x2, √d·x3, 1]
For x ∈ R^n, the d-order polynomial kernel's implicit feature space is (n+d choose d)-dimensional. By contrast, computing the kernel directly only requires O(n) time.
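A quick numeric check of the claim above, showing the quadratic kernel computes the same value as the explicit 13-dimensional feature map without ever building that vector (a sketch in the R^3 setting of the slide):

    import numpy as np

    def phi_quadratic(x):
        """Explicit quadratic feature map for x in R^3: all pairwise products, sqrt(2)-scaled linear terms, and a constant."""
        x1, x2, x3 = x
        pairs = [xi * xj for xi in (x1, x2, x3) for xj in (x1, x2, x3)]
        return np.array(pairs + [np.sqrt(2) * x1, np.sqrt(2) * x2, np.sqrt(2) * x3, 1.0])

    def quad_kernel(x, z):
        """K(x, z) = (x . z + 1)^2, computed in O(n) time."""
        return (float(np.dot(x, z)) + 1.0) ** 2

    x = np.array([1.0, 2.0, 3.0])
    z = np.array([0.5, -1.0, 2.0])
    print(np.isclose(np.dot(phi_quadratic(x), phi_quadratic(z)), quad_kernel(x, z)))  # True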

Some Kernels (3)
Kernels implicitly map original feature vectors to higher-dimensional spaces, take the dot product there, and hand the result back.
Radial Basis Function (or Gaussian) kernel: infinite-dimensional representation.
Discrete kernels, e.g. string kernels: features are all possible strings up to some length; to compute the kernel, we don't need to enumerate all substrings for each word, only find the strings appearing in both x and x'.

Why Kernels?
Can't you just add these features on your own (e.g. add all pairs of features instead of using the quadratic kernel)?
Yes, in principle: just compute them; no need to modify any algorithms.
But the number of features can get large (or infinite).
Kernels let us compute with these features implicitly.
Example: the implicit dot product in the polynomial, Gaussian, and string kernels takes much less space and time per dot product.
Of course, there's a cost for using the pure dual algorithms: you need to compute the similarity to every training datum.
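For reference, a sketch of the Gaussian (RBF) kernel mentioned above (sigma is a free bandwidth parameter; 1.0 is just an illustrative default):

    import numpy as np

    def rbf_kernel(x, z, sigma=1.0):
        """Gaussian / RBF kernel: exp(-||x - z||^2 / (2 sigma^2)); the implicit feature space is infinite-dimensional."""
        diff = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
        return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))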

Recap: Classification
Classification systems: supervised learning.
Make a prediction given evidence.
We've seen several methods for this.
Useful when you have labeled data.

Where are we and what's left?
So far, foundations: Search, CSPs, Adversarial search, MDPs and RL, Bayes nets and probabilistic inference, Machine learning.
Tuesday: Applications in Robotics.
Thursday: Applications in Vision and Language + Conclusion + Where/How to learn more.