Andrew W. Moore Professor School of Computer Science Carnegie Mellon University
1 Support Vector Machines Note to other teachers and users of these slides. Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: Comments and corrections gratefully received. Andrew W. Moore Professor School of Computer Science Carnegie Mellon University awm@cs.cmu.edu Nov 23rd, 2001
2 Overview Proposed by Vapnik and his colleagues - Started in 1963, taking shape in the late '70s as part of his statistical learning theory (with Chervonenkis) - Current form established in the early '90s (with Cortes) Became popular in the last decade - Classification, regression (function approx.), optimization - Compared favorably to MLP Basic ideas - Overcoming the linear separability problem by transforming the problem into a higher-dimensional space using kernel functions - (becomes equivalent to a 2-layer perceptron when the kernel is a sigmoid function) - Maximize margin of decision boundary Support Vector Machines: Slide 2
3 Linear Classifiers x f y est denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) How would you classify this data? Support Vector Machines: Slide 3
4 Linear Classifiers x f y est denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) How would you classify this data? Support Vector Machines: Slide 4
5 Linear Classifiers x f y est denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) How would you classify this data? Support Vector Machines: Slide 5
6 Linear Classifiers x f y est denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) How would you classify this data? Support Vector Machines: Slide 6
7 Linear Classifiers x f y est denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) Any of these would be fine....but which is best? Support Vector Machines: Slide 7
8 Classifier Margin x f y est denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint. Support Vector Machines: Slide 8
9 Maximum Margin x f y est denotes +1 denotes -1 Linear SVM f(x,w,b) = sign(w. x - b) The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM). Support Vector Machines: Slide 9
10 Maximum Margin x f y est denotes +1 denotes -1 Support Vectors are those datapoints that the margin pushes up against Linear SVM f(x,w,b) = sign(w. x - b) The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM). Support Vector Machines: Slide 10
11 Why Maximum Margin? denotes +1 denotes -1 f(x,w,b) = sign(w. x - b) The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM). Support Vectors are those datapoints that the margin pushes up against. 1. Intuitively this feels safest. 2. If we've made a small error in the location of the boundary (it's been jolted in its perpendicular direction) this gives us least chance of causing a misclassification. 3. CV is easy since the model is immune to removal of any non-support-vector datapoints. 4. There's some theory that this is a good thing. 5. Empirically it works very very well. Support Vector Machines: Slide 11
12 Specifying a line and margin Plus-Plane Classifier Boundary Minus-Plane How do we represent this mathematically? in m input dimensions? Support Vector Machines: Slide 12
13 Specifying a line and margin Plus-Plane Classifier Boundary Minus-Plane Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } Classify as.. +1 if w. x + b >= 1, -1 if w. x + b <= -1, Universe explodes if -1 < w. x + b < 1 Support Vector Machines: Slide 13
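The three-zone decision rule on this slide can be sketched directly. A minimal stand-alone sketch; the example weight vector and query points below are invented for illustration:

```python
def classify(w, x, b):
    """Decision rule from the slide: +1 at or beyond the plus-plane,
    -1 at or beyond the minus-plane, undefined inside the margin."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    if s >= 1:
        return +1
    if s <= -1:
        return -1
    return None  # -1 < w.x + b < 1: the rule does not apply here

# w = (1, 0), b = 0: plus-plane at x1 = 1, minus-plane at x1 = -1
print(classify((1, 0), (2.0, 3.0), 0))   # 1 (beyond the plus-plane)
print(classify((1, 0), (-1.5, 0.0), 0))  # -1
print(classify((1, 0), (0.2, 9.0), 0))   # None (between the planes)
```

Points strictly between the two planes get no label, which is exactly why the next slides maximize the distance between the planes rather than leaving it arbitrary.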
14 Computing the margin width M = Margin Width How do we compute M in terms of w and b? Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } Claim: The vector w is perpendicular to the Plus Plane. Why? Support Vector Machines: Slide 14
15 Computing the margin width M = Margin Width Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } How do we compute M in terms of w and b? Claim: The vector w is perpendicular to the Plus Plane. Why? And so of course the vector w is also perpendicular to the Minus Plane. Let u and v be two vectors on the Plus Plane. What is w. (u - v)? Support Vector Machines: Slide 15
16 Computing the margin width x+ x- M = Margin Width How do we compute M in terms of w and b? Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } The vector w is perpendicular to the Plus Plane. Let x- be any point on the minus plane. Let x+ be the closest plus-plane-point to x-. Any location in R^m: not necessarily a datapoint. Support Vector Machines: Slide 16
17 Computing the margin width x+ x- M = Margin Width How do we compute M in terms of w and b? Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } The vector w is perpendicular to the Plus Plane. Let x- be any point on the minus plane. Let x+ be the closest plus-plane-point to x-. Claim: x+ = x- + λw for some value of λ. Why? Support Vector Machines: Slide 17
18 Computing the margin width x+ x- M = Margin Width The line from x- to x+ is perpendicular to the planes. So to get from x- to x+ travel some distance in direction w. How do we compute M in terms of w and b? Plus-plane = { x : w. x + b = +1 } Minus-plane = { x : w. x + b = -1 } The vector w is perpendicular to the Plus Plane. Let x- be any point on the minus plane. Let x+ be the closest plus-plane-point to x-. Claim: x+ = x- + λw for some value of λ. Why? Support Vector Machines: Slide 18
19 Computing the margin width x+ M = Margin Width x- What we know: w. x+ + b = +1; w. x- + b = -1; x+ = x- + λw; |x+ - x-| = M. It's now easy to get M in terms of w and b. Support Vector Machines: Slide 19
20 Computing the margin width x+ M = Margin Width x- What we know: w. x+ + b = +1; w. x- + b = -1; x+ = x- + λw; |x+ - x-| = M. It's now easy to get M in terms of w and b: w. (x- + λw) + b = 1 => w. x- + b + λ w.w = 1 => -1 + λ w.w = 1 => λ = 2 / (w.w) Support Vector Machines: Slide 20
21 Computing the margin width M = Margin Width = 2 / sqrt(w.w) What we know: w. x+ + b = +1; w. x- + b = -1; x+ = x- + λw; |x+ - x-| = M; λ = 2 / (w.w). M = |x+ - x-| = |λw| = λ|w| = λ sqrt(w.w) = (2 / (w.w)) sqrt(w.w) = 2 / sqrt(w.w) Support Vector Machines: Slide 21
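The derivation above can be checked numerically. A small sketch: the weight vector w = (3, 4) and the chosen minus-plane point are arbitrary illustrations, not from the slides:

```python
import math

def margin_width(w):
    """M = 2 / sqrt(w . w), as derived on the slide."""
    return 2 / math.sqrt(sum(wi * wi for wi in w))

w, b = (3.0, 4.0), 0.0                      # w.w = 25
lam = 2 / sum(wi * wi for wi in w)          # lambda = 2 / (w.w) = 0.08
x_minus = (-3.0 / 25, -4.0 / 25)            # satisfies w.x + b = -1
x_plus = tuple(xm + lam * wi for xm, wi in zip(x_minus, w))

# x+ = x- + lambda*w really does land on the plus-plane...
assert abs(sum(wi * xi for wi, xi in zip(w, x_plus)) + b - 1) < 1e-12
# ...and the distance |x+ - x-| equals 2 / sqrt(w.w).
assert abs(math.dist(x_plus, x_minus) - margin_width(w)) < 1e-12
print(margin_width(w))  # 0.4, i.e. 2/5
```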
22 Learning the Maximum Margin Classifier x+ M = Margin Width = 2 / sqrt(w.w) x- Given a guess of w and b we can: Compute whether all data points are in the correct half-planes; Compute the width of the margin. So now we just need to write a program to search the space of w's and b's to find the widest margin that matches all the datapoints. How? Gradient descent? Simulated Annealing? Matrix Inversion? EM? Newton's Method? Support Vector Machines: Slide 22
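The two computations the slide lists — checking that every datapoint is in its correct half-plane, and computing the margin width — can be sketched as follows (the toy dataset is invented for illustration):

```python
import math

def feasible(w, b, data):
    """True iff every labelled point (x, y) with y in {+1, -1}
    is in its correct half-plane: y * (w.x + b) >= 1."""
    return all(y * (sum(wi * xi for wi, xi in zip(w, x)) + b) >= 1
               for x, y in data)

def margin(w):
    """Width of the margin for this guess of w."""
    return 2 / math.sqrt(sum(wi * wi for wi in w))

data = [((2.0, 2.0), +1), ((3.0, 1.0), +1), ((-1.0, -1.0), -1)]
print(feasible((0.5, 0.5), 0.0, data))  # True: all points correct
print(margin((0.5, 0.5)))               # 2 / sqrt(0.5) ≈ 2.828
```

Searching over (w, b) for the feasible guess with the largest margin is exactly the optimization problem the next slides cast as quadratic programming.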
23 Learning via Quadratic Programming QP is a well-studied class of optimization algorithms to maximize a quadratic function of some real-valued variables subject to linear constraints. Support Vector Machines: Slide 23
24 Quadratic Programming Find arg max over u of: c + d^T u + (u^T R u) / 2 (quadratic criterion) Subject to n additional linear inequality constraints: a11 u1 + a12 u2 + ... + a1m um <= b1; a21 u1 + a22 u2 + ... + a2m um <= b2; ... ; an1 u1 + an2 u2 + ... + anm um <= bn And subject to e additional linear equality constraints: a(n+1)1 u1 + a(n+1)2 u2 + ... + a(n+1)m um = b(n+1); ... ; a(n+e)1 u1 + a(n+e)2 u2 + ... + a(n+e)m um = b(n+e) Support Vector Machines: Slide 24
25 Quadratic Programming (same formulation, repeated) Support Vector Machines: Slide 25
26 Learning the Maximum Margin Classifier M = 2 / sqrt(w.w) Given a guess of w, b we can: Compute whether all data points are in the correct half-planes; Compute the margin width. Assume R datapoints, each (xk, yk) where yk = +/- 1. What should our quadratic optimization criterion be? How many constraints will we have? What should they be? Support Vector Machines: Slide 26
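The answer the slide is driving toward: minimize w.w subject to one constraint per datapoint, yk (w. xk + b) >= 1, so there are R constraints. To make that concrete without a real QP solver, here is a brute-force grid search standing in for quadratic programming, on an invented 1-D dataset (both the data and the grid are illustrative assumptions, not the method the deck uses in practice):

```python
# Hard-margin SVM learning as an optimization problem:
#   minimize  w.w   subject to  y_k * (w * x_k + b) >= 1  for all R points.
data = [(1.0, +1), (3.0, +1), (-1.0, -1)]  # 1-D toy set, R = 3

def ok(w, b):
    """All R constraints satisfied for this guess?"""
    return all(y * (w * x + b) >= 1 for x, y in data)

# Brute-force stand-in for a QP solver: scan a grid of (w, b) guesses
# and keep the feasible one with the smallest w.w (widest margin).
w, b = min(((wi, bj)
            for wi in (i / 100 for i in range(1, 301))
            for bj in (j / 100 for j in range(-200, 201))
            if ok(wi, bj)),
           key=lambda wb: wb[0] ** 2)
print(w, b, 2 / abs(w))  # 1.0 0.0 2.0 : planes at x = +1 and x = -1
```

On this data the support vectors are x = 1 and x = -1; the point x = 3 could be deleted without changing the answer, which illustrates reason 3 on the "Why Maximum Margin?" slide.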
27 Suppose we're in 1-dimension What would SVMs do with this data? x=0 Support Vector Machines: Slide 27
28 Suppose we're in 1-dimension Not a big surprise x=0 Positive plane Negative plane Support Vector Machines: Slide 28
29 Harder 1-dimensional dataset That's wiped the smirk off SVM's face. What can be done about this? x=0 Support Vector Machines: Slide 29
30 Harder 1-dimensional dataset Remember how permitting non-linear basis functions made linear regression so much nicer? Let's permit them here too: zk = (xk, xk^2) x=0 Support Vector Machines: Slide 30
31 Harder 1-dimensional dataset Remember how permitting non-linear basis functions made linear regression so much nicer? Let's permit them here too: zk = (xk, xk^2) x=0 Support Vector Machines: Slide 31
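The remapping zk = (xk, xk^2) can be demonstrated on an invented dataset that no single 1-D threshold separates (the particular points and the z-space classifier below are illustrative assumptions):

```python
# Positives near 0, negatives further out: not separable by any
# threshold on x alone.
data = [(-3.0, -1), (-2.0, -1), (-0.5, +1), (0.5, +1), (2.0, -1), (3.0, -1)]

# Map each point through z(x) = (x, x**2).
z = [((x, x * x), y) for x, y in data]

# In z-space a linear classifier works: w = (0, -1), b = 2,
# i.e. sign(-x**2 + 2) -- a threshold on the second coordinate.
predict = lambda x: 1 if -x * x + 2 >= 0 else -1
print(all(predict(x) == y for x, y in data))  # True
```

The separating line in z-space corresponds to the pair of thresholds x = ±sqrt(2) back in the original 1-D space, which is the picture slide 31 is drawing.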
32 Common SVM basis functions zk = ( polynomial terms of xk of degree 1 to q ) zk = ( radial basis functions of xk ): zk[j] = φj(xk) = KernelFn( |xk - cj| / KW ) zk = ( sigmoid functions of xk ) Support Vector Machines: Slide 32
33 SVM Kernel Functions K(a,b) = (a. b + 1)^d is an example of an SVM Kernel Function. Beyond polynomials there are other very high dimensional basis functions that can be made practical by finding the right Kernel Function. Radial-Basis-style Kernel Function: K(a,b) = exp( -(a - b)^2 / (2 σ^2) ) Neural-net-style Kernel Function: K(a,b) = tanh( κ a. b - δ ) σ, κ and δ are magic parameters that must be chosen by a model selection method such as CV or VCSRM* *see last lecture Support Vector Machines: Slide 33
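The three kernel families on this slide can be written out directly. A minimal sketch; the default values for d, σ, κ and δ are arbitrary placeholders for the "magic parameters" the slide says must be chosen by model selection:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def poly_kernel(a, b, d=2):
    """K(a,b) = (a.b + 1)^d"""
    return (dot(a, b) + 1) ** d

def rbf_kernel(a, b, sigma=1.0):
    """K(a,b) = exp(-(a-b)^2 / (2 sigma^2)), with (a-b)^2 the
    squared Euclidean distance."""
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-sq / (2 * sigma ** 2))

def tanh_kernel(a, b, kappa=1.0, delta=0.0):
    """K(a,b) = tanh(kappa * a.b - delta)"""
    return math.tanh(kappa * dot(a, b) - delta)

a, b = (1.0, 0.0), (0.0, 1.0)
print(poly_kernel(a, b))  # (0 + 1)^2 = 1
print(rbf_kernel(a, a))   # exp(0) = 1.0
```

Each function returns the dot product of the two inputs after an implicit basis expansion, without ever constructing that (possibly infinite-dimensional) expansion explicitly.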
34-36 (Figure-only slides; no text survives in this transcription.) Support Vector Machines: Slides 34-36
37 SVM Performance Anecdotally they work very very well indeed. Overcomes the linear separability problem. Generalizes well (overfitting not as serious). Example: currently the best-known classifier on a well-studied hand-written-character recognition benchmark. Several reliable people doing practical real-world work claim that SVMs have saved them when their other favorite classifiers did poorly. There is a lot of excitement and religious fervor about SVMs as of 2001. Despite this, some practitioners (including your lecturer) are a little skeptical. Support Vector Machines: Slide 37
38 Doing multi-class classification SVMs can only handle two-class outputs (i.e. a categorical output variable with arity 2). What can be done? Answer: with output arity N, learn N SVMs. SVM 1 learns Output==1 vs Output != 1; SVM 2 learns Output==2 vs Output != 2; ... ; SVM N learns Output==N vs Output != N. Then to predict the output for a new input, just predict with each SVM and find out which one puts the prediction the furthest into the positive region. Support Vector Machines: Slide 38
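The one-vs-rest scheme above can be sketched with hypothetical stand-in scorers in place of N trained SVMs (everything below is invented for illustration; a real scorer would return w. x + b from SVM k):

```python
def ovr_predict(scorers, x):
    """Predict with each class's scorer and return the class whose
    scorer pushes x furthest into its positive region."""
    return max(scorers, key=lambda c: scorers[c](x))

# Toy stand-ins for "SVM k learns Output==k vs Output != k":
scorers = {
    1: lambda x: x[0] - 1.0,      # high first coordinate -> class 1
    2: lambda x: x[1] - 1.0,      # high second coordinate -> class 2
    3: lambda x: -x[0] - x[1],    # both coordinates low -> class 3
}
print(ovr_predict(scorers, (3.0, 0.0)))    # 1
print(ovr_predict(scorers, (-2.0, -2.0)))  # 3
```

Taking the argmax of the N decision values, rather than the N hard ±1 labels, resolves the cases where several (or none) of the two-class SVMs claim the input.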
39 References An excellent tutorial on VC-dimension and Support Vector Machines: C.J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998. The VC/SRM/SVM Bible: Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience, 1998. Download SVM-light: Support Vector Machines: Slide 39
More informationCMU-Q Lecture 24:
CMU-Q 15-381 Lecture 24: Supervised Learning 2 Teacher: Gianni A. Di Caro SUPERVISED LEARNING Hypotheses space Hypothesis function Labeled Given Errors Performance criteria Given a collection of input
More informationLecture 3. (2) Last time: 3D space. The dot product. Dan Nichols January 30, 2018
Lectre 3 The dot prodct Dan Nichols nichols@math.mass.ed MATH 33, Spring 018 Uniersity of Massachsetts Janary 30, 018 () Last time: 3D space Right-hand rle, the three coordinate planes 3D coordinate system:
More informationDecision Trees. Machine Learning CSEP546 Carlos Guestrin University of Washington. February 3, 2014
Decision Trees Machine Learning CSEP546 Carlos Guestrin University of Washington February 3, 2014 17 Linear separability n A dataset is linearly separable iff there exists a separating hyperplane: Exists
More informationMax Margin-Classifier
Max Margin-Classifier Oliver Schulte - CMPT 726 Bishop PRML Ch. 7 Outline Maximum Margin Criterion Math Maximizing the Margin Non-Separable Data Kernels and Non-linear Mappings Where does the maximization
More informationAn Investigation into Estimating Type B Degrees of Freedom
An Investigation into Estimating Type B Degrees of H. Castrp President, Integrated Sciences Grop Jne, 00 Backgrond The degrees of freedom associated with an ncertainty estimate qantifies the amont of information
More informationInstruction register. Data. Registers. Register # Memory data register
Where we are headed Single Cycle Problems: what if we had a more complicated instrction like floating point? wastefl of area One Soltion: se a smaller cycle time have different instrctions take different
More informationChapter 9. Support Vector Machine. Yongdai Kim Seoul National University
Chapter 9. Support Vector Machine Yongdai Kim Seoul National University 1. Introduction Support Vector Machine (SVM) is a classification method developed by Vapnik (1996). It is thought that SVM improved
More informationSection 7.4: Integration of Rational Functions by Partial Fractions
Section 7.4: Integration of Rational Fnctions by Partial Fractions This is abot as complicated as it gets. The Method of Partial Fractions Ecept for a few very special cases, crrently we have no way to
More informationLinear Models for Classification
Linear Models for Classification Oliver Schulte - CMPT 726 Bishop PRML Ch. 4 Classification: Hand-written Digit Recognition CHINE INTELLIGENCE, VOL. 24, NO. 24, APRIL 2002 x i = t i = (0, 0, 0, 1, 0, 0,
More informationSupport Vector Machines. CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington
Support Vector Machines CSE 6363 Machine Learning Vassilis Athitsos Computer Science and Engineering Department University of Texas at Arlington 1 A Linearly Separable Problem Consider the binary classification
More informationNeural Network Learning: Testing Bounds on Sample Complexity
Neural Network Learning: Testing Bounds on Sample Complexity Joaquim Marques de Sá, Fernando Sereno 2, Luís Alexandre 3 INEB Instituto de Engenharia Biomédica Faculdade de Engenharia da Universidade do
More informationContent. Learning. Regression vs Classification. Regression a.k.a. function approximation and Classification a.k.a. pattern recognition
Content Andrew Kusiak Intelligent Systems Laboratory 239 Seamans Center The University of Iowa Iowa City, IA 52242-527 andrew-kusiak@uiowa.edu http://www.icaen.uiowa.edu/~ankusiak Introduction to learning
More informationLogistic Regression. Robot Image Credit: Viktoriya Sukhanova 123RF.com
Logistic Regression These slides were assembled by Eric Eaton, with grateful acknowledgement of the many others who made their course materials freely available online. Feel free to reuse or adapt these
More informationNon-Bayesian Classifiers Part II: Linear Discriminants and Support Vector Machines
Non-Bayesian Classifiers Part II: Linear Discriminants and Support Vector Machines Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2018 CS 551, Fall
More informationOptimization via the Hamilton-Jacobi-Bellman Method: Theory and Applications
Optimization via the Hamilton-Jacobi-Bellman Method: Theory and Applications Navin Khaneja lectre notes taken by Christiane Koch Jne 24, 29 1 Variation yields a classical Hamiltonian system Sppose that
More informationJeff Howbert Introduction to Machine Learning Winter
Classification / Regression Support Vector Machines Jeff Howbert Introduction to Machine Learning Winter 2012 1 Topics SVM classifiers for linearly separable classes SVM classifiers for non-linearly separable
More informationCOMP 652: Machine Learning. Lecture 12. COMP Lecture 12 1 / 37
COMP 652: Machine Learning Lecture 12 COMP 652 Lecture 12 1 / 37 Today Perceptrons Definition Perceptron learning rule Convergence (Linear) support vector machines Margin & max margin classifier Formulation
More information