UVA CS / Introduction to Machine Learning and Data Mining. Lecture 10: Classification with Support Vector Machine (cont.)
UVA CS / Introduction to Machine Learning and Data Mining. Lecture 10: Classification with Support Vector Machine (cont.)
Yanjun Qi / Jane, University of Virginia, Department of Computer Science

Where we are? The five major sections of this course:
- Regression (supervised)
- Classification (supervised)
- Unsupervised models
- Learning theory
- Graphical models
Where we are? Three major sections for classification. We can divide the large variety of classification approaches into roughly three major types:
1. Discriminative: directly estimate a decision rule/boundary; e.g., support vector machine, decision tree.
2. Generative: build a generative statistical model; e.g., Bayesian networks.
3. Instance based classifiers: use observations directly (no models); e.g., K nearest neighbors.

Today / Last Lecture: Support Vector Machine (SVM)
- History of SVM
- Large Margin Linear Classifier
- Define Margin (M) in terms of model parameters
- Optimization to learn model parameters (w, b)
- Non linearly separable case
- Optimization with dual form
- Nonlinear decision boundary
- Multiclass SVM
Roadmap recap (same outline as above).

History of SVM. Young / theoretically sound: SVM is inspired by statistical learning theory [3]. Impactful: SVM was first introduced in 1992 [1]. SVM became popular because of its success in handwritten digit recognition: a 1.1% test error rate for SVM, the same as the error rate of a carefully constructed neural network, LeNet 4. See Section 5.11 in [2] or the discussion in [3] for details. SVM is now regarded as an important example of kernel methods, arguably the hottest area in machine learning 10 years ago.

[1] B.E. Boser et al. A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, 1992.
[2] L. Bottou et al. Comparison of classifier methods: a case study in handwritten digit recognition. Proceedings of the 12th IAPR International Conference on Pattern Recognition, vol. 2, pp. 77-82, 1994.
[3] V. Vapnik. The Nature of Statistical Learning Theory. 2nd edition, Springer, 1999.
Applications of SVMs: computer vision, text categorization, ranking (e.g., Google searches), handwritten character recognition, time series analysis, bioinformatics, and more. Lots of very successful applications!

Handwritten digit recognition (1999, SVM) [figure]
Roadmap recap (same outline as above).

A dataset for binary classification. Output as binary class label: 1 or -1.
- Data/points/instances/examples/samples/records: the rows.
- Features/attributes/dimensions/independent variables/covariates/predictors/regressors: the columns, except the last.
- Target/outcome/response/label/dependent variable: the special column to be predicted (the last column).
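As a concrete illustration of this layout, a minimal numpy sketch (the numbers are made up, not the lecture's dataset):

```python
import numpy as np

# Rows are samples, columns are features; y holds the binary target in {+1, -1}.
X = np.array([[2.0, 3.0],   # sample 1: two features
              [1.0, 0.5],   # sample 2
              [3.5, 2.2],   # sample 3
              [0.2, 0.1]])  # sample 4
y = np.array([+1, -1, +1, -1])  # one label per row of X

n, p = X.shape  # n samples, p features
print(n, p)     # 4 2
```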
Max margin classifiers. Instead of fitting all points, focus on the boundary points. Learn a boundary that leads to the largest margin from points on both sides. Why? It is intuitive and makes sense, it has some theoretical support, and it works well in practice.

Max-margin and the decision boundary. The decision boundary should be as far away from the data of both classes (class 1 and class -1) as possible. Here w is a p-dimensional vector and b is a scalar.
Roadmap recap (same outline as above).

Maximizing the margin, observation 1: the vector w is orthogonal to the +1 plane. [figure: the two classes separated by a margin of width M]
Maximizing the margin, observation 2. The three planes: w^T x + b = +1, w^T x + b = 0, w^T x + b = -1. Classify as +1 if w^T x + b >= 1; classify as -1 if w^T x + b <= -1; undefined if -1 < w^T x + b < 1.

Observation 1: the vector w is orthogonal to the +1 and -1 planes.
Observation 2: if x+ is a point on the +1 plane and x- is the closest point to x+ on the -1 plane, then x+ = λw + x-. Since w is orthogonal to both planes, we need to travel some distance along w to get from x+ to x-.

Putting it together. We have:
  w^T x+ + b = +1
  w^T x- + b = -1
  x+ = λw + x-
  |x+ - x-| = M
We can now define M in terms of w and b. Substituting the third equation into the first:
  w^T (λw + x-) + b = +1
  (w^T x- + b) + λ w^T w = +1
  -1 + λ w^T w = +1
  λ = 2 / (w^T w)
Putting it together (cont.). With w^T x+ + b = +1, w^T x- + b = -1, x+ = λw + x-, and λ = 2/(w^T w):
  M = |x+ - x-| = |λw| = λ|w| = λ sqrt(w^T w) = 2 sqrt(w^T w) / (w^T w) = 2 / sqrt(w^T w)

Roadmap recap (same outline as above).
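Before the optimization step, a quick numeric sanity check of the margin formula M = 2/sqrt(w^T w); the values of w and b here are arbitrary, chosen only for illustration:

```python
import numpy as np

w = np.array([3.0, 4.0])
b = -2.0

# Pick any point x_minus on the -1 plane (w^T x + b = -1), then step to the
# +1 plane along w: x_plus = lambda * w + x_minus with lambda = 2 / (w^T w).
x_minus = np.array([0.0, (-1 - b) / w[1]])   # solves w^T x + b = -1
lam = 2.0 / w.dot(w)
x_plus = lam * w + x_minus

print(w.dot(x_plus) + b)                 # +1, so x_plus lies on the +1 plane
print(np.linalg.norm(x_plus - x_minus))  # 0.4, the distance between the planes
print(2.0 / np.linalg.norm(w))           # 0.4 = M, matching the derivation
```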
Optimization step, i.e., learning the optimal parameters for the SVM. Using M = 2/sqrt(w^T w):
Minimize (w^T w)/2 subject to the following constraints:
- for all x in class +1: w^T x + b >= 1
- for all x in class -1: w^T x + b <= -1
A total of n constraints if we have n input samples. Equivalently:
  argmin_{w,b} Σ_{i=1..p} w_i^2   subject to   ∀ x_i ∈ D_train : y_i (x_i · w + b) >= 1

SVM as a QP problem. A generic quadratic program has the form
  min_u (1/2) u^T R u + d^T u + c
subject to n inequality constraints
  a_11 u_1 + a_12 u_2 + ... <= b_1
  ...
  a_n1 u_1 + a_n2 u_2 + ... <= b_n
and k equality constraints
  a_{n+1,1} u_1 + a_{n+1,2} u_2 + ... = b_{n+1}
  ...
  a_{n+k,1} u_1 + a_{n+k,2} u_2 + ... = b_{n+k}
The SVM problem above is exactly this form, with R as the identity matrix, d as the zero vector, and c = 0.
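A minimal sketch of this primal QP, assuming the cvxopt package is available; the data is a made-up toy set, and the tiny ridge on b is only a numerical convenience, not part of the formulation:

```python
import numpy as np
from cvxopt import matrix, solvers  # assumes cvxopt is installed

solvers.options['show_progress'] = False

# Toy linearly separable data (made up for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, p = X.shape

# Stack the unknowns as u = [w_1, ..., w_p, b].
R = np.zeros((p + 1, p + 1))
R[:p, :p] = np.eye(p)   # quadratic term penalizes w only
R[p, p] = 1e-8          # tiny ridge on b keeps the KKT system nonsingular
d = np.zeros(p + 1)

# y_i (w^T x_i + b) >= 1 rewritten in <= form: -y_i [x_i, 1] u <= -1
G = -y[:, None] * np.hstack([X, np.ones((n, 1))])
h = -np.ones(n)

sol = solvers.qp(matrix(R), matrix(d), matrix(G), matrix(h))
u = np.array(sol['x']).ravel()
w, b = u[:p], u[p]
print(w, b)             # learned parameters
print(y * (X @ w + b))  # every margin >= 1 (up to solver tolerance)
```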
Roadmap recap (same outline as above).

Non linearly separable case. Instead of minimizing the number of misclassified points, we can minimize the distance between these points and their correct plane (the +1 plane or the -1 plane), using slack variables ε_i. The new optimization problem is:
  min_w (w^T w)/2 + C Σ_{i=1..n} ε_i
subject to the following inequality constraints:
- for all x_i in class +1: w^T x_i + b >= 1 - ε_i
- for all x_i in class -1: w^T x_i + b <= -1 + ε_i
(a total of n constraints), and
- for all i: ε_i >= 0
(another n constraints).
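One way to see the slack variables concretely: for a fixed (w, b), the smallest feasible ε_i is the hinge value max(0, 1 - y_i (w^T x_i + b)). A short sketch with made-up numbers:

```python
import numpy as np

# For fixed (w, b), the optimal slack per point is the hinge loss, and the
# soft-margin objective is w^T w / 2 + C * sum(eps). All values are invented.
w, b, C = np.array([1.0, -1.0]), 0.0, 10.0
X = np.array([[2.0, 0.0], [0.3, 0.0], [-1.0, 1.0], [0.5, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

margins = y * (X @ w + b)
eps = np.maximum(0.0, 1.0 - margins)  # zero for points beyond their plane
objective = 0.5 * w.dot(w) + C * eps.sum()
print(eps)        # [0.  0.7 0.  0.5]
print(objective)  # 13.0
```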
Where we are: two optimization problems, for the separable and non separable cases.
Separable: min_w (w^T w)/2, subject to w^T x + b >= 1 for all x in class +1 and w^T x + b <= -1 for all x in class -1.
Non separable: min_w (w^T w)/2 + C Σ_{i=1..n} ε_i, subject to w^T x_i + b >= 1 - ε_i for x_i in class +1, w^T x_i + b <= -1 + ε_i for x_i in class -1, and ε_i >= 0 for all i.

Roadmap recap (same outline as above).
Where we are (cont.): the same two optimization problems as above. Instead of solving these QPs directly, we will solve a dual formulation of the SVM optimization problem. The main reason for switching to this representation is that it allows us to use a neat trick that will make our lives easier (and the run time faster).

Optimization review: constrained optimization. Consider min_u u^2 subject to u >= b.
- Case 1: b <= 0. The global minimum u = 0 lies inside the allowed region, so the constrained minimum equals the global minimum.
- Case 2: b > 0. The global minimum is outside the allowed region, so the constrained minimum sits on the boundary, at u = b.
Optimization review: constrained optimization with Lagrange. With equality constraints, we optimize f(x) subject to g_i(x) = 0. Method of Lagrange multipliers: convert to a higher-dimensional problem. Minimize
  f(x) + Σ_i λ_i g_i(x)
with respect to (x_1 ... x_n; λ_1 ... λ_k), introducing a Lagrange multiplier λ_i for each constraint. This constructs the Lagrangian for the original optimization problem.

Optimization review: dual problem. Using the dual problem turns constrained optimization into unconstrained optimization (and we need to change maximization to minimization). This is only valid when the original optimization problem is convex/concave (strong duality).
Primal problem: x* = argmax_x f(x) subject to g(x) = c.
Dual problem: λ* = argmin_λ l(λ), where l(λ) = sup_x ( f(x) + λ(g(x) - c) ).
When the problem is convex/concave, the primal solution x* is recovered from λ*.
An alternative (dual) representation of the SVM QP. We will start with the linearly separable case. Instead of encoding the correct classification rule as a constraint, we will use Lagrange multipliers to encode it as part of our minimization problem. The primal problem
  min (w^T w)/2, s.t. w^T x + b >= 1 for x in class +1 and w^T x + b <= -1 for x in class -1
can be written compactly as
  min (w^T w)/2, s.t. (w^T x_i + b) y_i >= 1.
Recall that Lagrange multipliers can be applied to turn the following problem:
  min_x x^2  s.t.  x - b >= 0
into
  min_x max_{α>=0} [ x^2 - α(x - b) ].
(When b <= 0 the constraint is inactive and the global minimum x = 0 is recovered; when b > 0 the constrained minimum x = b is recovered.)
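A symbolic walk-through of that review example, assuming sympy; it reproduces the inner minimizer, the dual function, and its maximizer:

```python
import sympy as sp

# min_x x^2 s.t. x >= b, via min_x max_{alpha>=0} x^2 - alpha*(x - b).
x, alpha, b = sp.symbols('x alpha b', real=True)
L = x**2 - alpha * (x - b)

x_star = sp.solve(sp.diff(L, x), x)[0]      # inner minimization: x = alpha/2
dual = sp.simplify(L.subs(x, x_star))       # l(alpha) = alpha*b - alpha**2/4
alpha_star = sp.solve(sp.diff(dual, alpha), alpha)[0]  # stationary: alpha = 2*b

print(x_star, dual, alpha_star)
# For b > 0: alpha* = 2b > 0 and x* = b (constraint active).
# For b <= 0: the maximizing feasible alpha is 0, recovering x* = 0.
```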
Lagrange multipliers for SVMs. Applying the same idea to the original formulation (min (w^T w)/2 s.t. (w^T x_i + b) y_i >= 1) gives the dual formulation
  min_{w,b} max_{α>=0} (w^T w)/2 - Σ_i α_i [ (w^T x_i + b) y_i - 1 ].
Setting the partial derivative with respect to w to 0 gives
  w = Σ_i α_i x_i y_i,
and b can then be recovered as b = y_k - w^T x_k for any k such that α_k > 0. Finally, taking the derivative with respect to b we get
  Σ_i α_i y_i = 0.

Dual SVM, interpretation: in w = Σ_i α_i x_i y_i, points whose α_i = 0 have no influence on the solution.
A geometrical interpretation. [figure: only the points on the margin (the support vectors) receive nonzero multipliers, e.g., α_1 = 0.8, α_6 = 1.4, α_8 = 0.6; all other α_i = 0]

Dual SVM for the linearly separable case. Substituting w into our target function and using the additional constraint, we get the dual formulation:
  max_α Σ_i α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j x_i^T x_j
  subject to Σ_i α_i y_i = 0 and α_i >= 0.
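A sketch of solving this dual with a generic QP solver (cvxopt assumed, same toy data as the primal sketch; the ridge term is only for numerical stability):

```python
import numpy as np
from cvxopt import matrix, solvers  # assumes cvxopt is installed

solvers.options['show_progress'] = False

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)

# cvxopt minimizes (1/2) a^T P a + q^T a, so maximizing the dual objective
# means P_ij = y_i y_j x_i^T x_j and q = -1.
K = X @ X.T                                        # all pairwise dot products
P = matrix(np.outer(y, y) * K + 1e-8 * np.eye(n))  # tiny ridge for stability
q = matrix(-np.ones(n))
G = matrix(-np.eye(n)); h = matrix(np.zeros(n))        # alpha_i >= 0
A = matrix(y.reshape(1, -1)); b = matrix(np.zeros(1))  # sum_i alpha_i y_i = 0

alpha = np.array(solvers.qp(P, q, G, h, A, b)['x']).ravel()
sv = alpha > 1e-6                    # the support vectors
w = (alpha[sv] * y[sv]) @ X[sv]      # w = sum_i alpha_i y_i x_i
b0 = y[sv][0] - w @ X[sv][0]         # b = y_k - w^T x_k for any SV k
print(np.round(alpha, 4), w, b0)
```

Note how only the margin points end up with α_i > 0, matching the geometrical interpretation above.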
Dual SVM for the linearly separable case (cont.). Our dual target function:
  max_α Σ_i α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j x_i^T x_j,  s.t. Σ_i α_i y_i = 0, α_i >= 0.
To evaluate a new sample x_j we need to compute
  w^T x_j + b = Σ_i α_i y_i x_i^T x_j + b.
Note that training needs dot products between all pairs of training samples, and prediction needs dot products with the training samples. Is this too much computational work (for example, when using a transformation of the data)?

Dual formulation for the non linearly separable case. Dual target function:
  max_α Σ_i α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j x_i^T x_j,  s.t. Σ_i α_i y_i = 0, C >= α_i >= 0 for all i.
The hyperparameter C should be tuned through k-fold cross-validation (see the sketch below). The only difference is that the α_i are now bounded: this is very similar to the optimization problem in the linearly separable case, except that there is an upper bound C on α_i. Once again, a QP solver can be used to find the α_i.
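A minimal sketch of tuning C by k-fold cross-validation, assuming scikit-learn; the dataset is randomly generated just so the snippet runs:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC  # assumes scikit-learn is installed

# Synthetic, roughly linearly separable data for illustration only.
rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

grid = GridSearchCV(SVC(kernel='linear'),
                    param_grid={'C': [0.01, 0.1, 1, 10, 100]},
                    cv=5)  # 5-fold cross-validation over candidate C values
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```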
Roadmap recap (same outline as above).

Classifying in 1-d. Can an SVM correctly classify this data? What about this? [figures: two 1-d datasets on the x axis; the first is separable by a single threshold, the second is not]
Classifying in 1-d (cont.). Can an SVM correctly classify this data? And now? (Extend with a polynomial basis: map x to (x, x^2).) [figure: the non-separable 1-d data becomes separable in the (x, x^2) plane]

Non-linear SVMs in 2-d. The original input space (x) can be mapped to some higher-dimensional feature space (φ(x)) where the training set is separable:
  x = (x_1, x_2),  φ(x) = (x_1^2, x_2^2, √2 x_1 x_2),  Φ: x → φ(x)
This slide is courtesy of
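Returning to the 1-d example above, a tiny numeric illustration with made-up points: no threshold on x separates the classes, but a threshold on the added feature x^2 does:

```python
import numpy as np

# Points with |x| large are one class, points near zero the other.
x = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])
y = np.array([1, 1, -1, -1, -1, 1, 1])

phi = np.column_stack([x, x ** 2])   # map x -> (x, x^2)
# In the mapped space the hyperplane x^2 = 2 (w = (0, 1), b = -2) separates:
pred = np.where(phi[:, 1] - 2.0 > 0, 1, -1)
print((pred == y).all())             # True
```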
Non-linear SVMs in 2-d (cont.). If data is mapped into a sufficiently high dimension, then samples will in general be linearly separable: N data points are in general separable in a space of N-1 dimensions or more!

A little bit of theory: Vapnik-Chervonenkis (VC) dimension. The VC dimension of the set of oriented lines in R^2 is 3. It can be shown that the VC dimension of the family of oriented separating hyperplanes in R^N is at least N+1.
Transformation of inputs. Possible problems: a high computation burden due to high dimensionality, and many more parameters. SVM solves these two issues simultaneously: kernel tricks allow efficient computation, and the dual formulation only assigns parameters to samples, not features. [figure: input space mapped by φ(·) into feature space]

Quadratic kernels. While working in higher dimensions is beneficial, it also increases our running time because of the dot product computation. However, there is a neat trick we can use. In the dual
  max_α Σ_i α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j Φ(x_i)^T Φ(x_j),  s.t. Σ_i α_i y_i = 0, α_i >= 0,
consider all quadratic terms for x_1, x_2, ..., x_m:
  Φ(x) = ( 1, √2 x_1, ..., √2 x_m, x_1^2, ..., x_m^2, √2 x_1 x_2, ..., √2 x_{m-1} x_m )
that is, m+1 linear terms (the constant plus the √2 x_i), m quadratic terms x_i^2, and m(m-1)/2 pairwise terms √2 x_i x_j, where m is the number of features in each vector x. The √2 factor will become clear on the next slide.
Dot product for quadratic kernels. How many operations do we need for the dot product?
  Φ(x)^T Φ(z) = 1 + 2 Σ_{i=1..m} x_i z_i + Σ_{i=1..m} x_i^2 z_i^2 + 2 Σ_{i=1..m} Σ_{j=i+1..m} x_i x_j z_i z_j
which is m + m + m(m-1)/2 terms, i.e., on the order of m^2 operations.

The kernel trick. However, we can obtain dramatic savings by noting that
  (x^T z + 1)^2 = (x·z + 1)^2 = (x·z)^2 + 2(x·z) + 1
                = Σ_i Σ_j x_i x_j z_i z_j + 2 Σ_i x_i z_i + 1
                = Φ(x)^T Φ(z).
We only need m operations! So, if we define the kernel function as K(x, z) = (x^T z + 1)^2, there is no need to carry out φ(.) explicitly.
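A quick numeric check of this identity for m = 3 (arbitrary vectors): the explicit quadratic feature map and the kernel give the same value:

```python
import numpy as np

def phi(x):
    # Explicit quadratic feature map with the sqrt(2) weights from above.
    m = len(x)
    pairs = [np.sqrt(2) * x[i] * x[j] for i in range(m) for j in range(i + 1, m)]
    return np.concatenate([[1.0], np.sqrt(2) * x, x ** 2, pairs])

x = np.array([1.0, 2.0, 3.0])
z = np.array([0.5, -1.0, 2.0])
print(phi(x) @ phi(z))     # explicit map: ~ m^2 multiplications
print((x @ z + 1.0) ** 2)  # kernel: m multiplications, same value 30.25
```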
Where we are. Our dual target function
  max_α Σ_i α_i - (1/2) Σ_{i,j} α_i α_j y_i y_j Φ(x_i)^T Φ(x_j),  s.t. Σ_i α_i y_i = 0, α_i >= 0
costs roughly m n^2 operations at each iteration (n^2 kernel evaluations at m operations each). To evaluate a new sample x_j we need to compute
  w^T Φ(x_j) + b = Σ_i α_i y_i Φ(x_i)^T Φ(x_j) + b,
which is m r operations, where r is the number of support vectors (those with α_i > 0). Again, defining the kernel function as K(x, z) = (x^T z + 1)^2 means there is no need to carry out φ(.) explicitly.

More examples of kernel functions:
- Linear kernel (we've seen it): K(x, x') = x^T x'.
- Polynomial kernel (we just saw an example): K(x, x') = (1 + x^T x')^p, where p = 2, 3, .... To get the feature vectors we concatenate all p-th order polynomial terms of the components of x (weighted appropriately).
- Radial basis kernel: K(x, x') = exp( -||x - x'||^2 / 2 ). In this case the feature space consists of functions, and the result is a non-parametric classifier. We never represent the features explicitly; we compute dot products in closed form. There is very interesting theory here (Reproducing Kernel Hilbert Spaces), not covered in detail in this course.
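Minimal sketches of these kernels as plain functions (numpy assumed); a kernel SVM only ever needs these values, since the prediction is f(x) = Σ_i α_i y_i K(x_i, x) + b:

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, p=2):
    return (1.0 + x @ z) ** p

def rbf_kernel(x, z):
    return np.exp(-0.5 * np.linalg.norm(x - z) ** 2)

x, z = np.array([1.0, 2.0]), np.array([0.0, 1.0])
print(linear_kernel(x, z), polynomial_kernel(x, z), rbf_kernel(x, z))
```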
Roadmap recap (same outline as above).

Multi-class classification with SVMs. What if we have data from more than two classes? The most common solution is one vs. all:
- create a classifier for each class against all other data;
- for a new point, use all classifiers and compare the margins across the classes.
Note that this is not necessarily valid, since this is not what we trained the SVM for, but it often works well in practice; see the sketch below.
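A minimal one-vs-all sketch, assuming scikit-learn's LinearSVC and using synthetic three-class data:

```python
import numpy as np
from sklearn.svm import LinearSVC  # assumes scikit-learn is installed

# One binary SVM per class (that class vs. the rest); predict the class whose
# classifier gives the largest margin. Data is synthetic, for illustration.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2) + c for c in ([0, 0], [4, 0], [0, 4])])
y = np.repeat([0, 1, 2], 30)

models = [LinearSVC(C=1.0).fit(X, np.where(y == k, 1, -1)) for k in range(3)]
scores = np.column_stack([m.decision_function(X) for m in models])
pred = scores.argmax(axis=1)   # class with the largest margin wins
print((pred == y).mean())      # training accuracy of the combined classifier
```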
Handwritten digit recognition (1999, SVM) [figure]

Why do SVMs work? If we are using huge feature spaces (with kernels), how come we are not overfitting the data?
- The number of parameters remains the same (and most are set to 0).
- While we have a lot of input values, in the end we only care about the support vectors, and these are usually a small group of samples.
- The minimization function (or the maximization of the margin) acts as a sort of regularization term, leading to reduced overfitting.
Software. A list of SVM implementations can be found online. Some implementations (such as LIBSVM) can handle multi-class classification. SVMLight is among the earliest implementations of SVM. Several Matlab toolboxes for SVM are also available.

References. Big thanks to Prof. Ziv Bar-Joseph @ CMU for allowing me to reuse some of his slides; Prof. Andrew Moore @ CMU's slides; Elements of Statistical Learning, by Hastie, Tibshirani and Friedman.
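To close the software pointer with something concrete, a minimal off-the-shelf usage sketch (not from the slides), assuming scikit-learn, whose SVC class wraps LIBSVM; it echoes the handwritten-digit application using the bundled digits data and illustrative hyperparameters:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC  # scikit-learn's SVC is backed by LIBSVM

X, y = datasets.load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# RBF-kernel SVM; C and gamma are illustrative and would normally be tuned
# by cross-validation as discussed above.
clf = SVC(kernel='rbf', C=10, gamma=0.001).fit(Xtr, ytr)
print(clf.score(Xte, yte))   # multi-class is handled internally (one-vs-one)
```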