17 Support Vector Machines
We now discuss an influential and effective classification algorithm called Support Vector Machines (SVMs). In addition to their successes in many classification problems, SVMs are responsible for introducing and/or popularizing several important ideas in machine learning, namely, kernel methods, maximum margin methods, convex optimization, and sparsity/support vectors. Unlike the mostly-Bayesian treatment that we have given in this course, SVMs are based on some very sophisticated Frequentist arguments (based on a theory called Structural Risk Minimization and VC-Dimension) which we will not discuss here, although there are many close connections to Bayesian formulations.

17.1 Maximizing the margin

Suppose we are given N training vectors {(x_i, y_i)}, where x_i ∈ R^D and y_i ∈ {−1, 1}. We want to learn a classifier

    f(x) = w^T φ(x) + b                                                  (1)

so that the classifier's output for a new x is sign(f(x)).

Suppose that our training data are linearly separable in the feature space φ(x), i.e., as illustrated in Figure 1, the two classes of training exemplars are sufficiently well separated in the feature space that one can draw a hyperplane between them (e.g., a line in 2D, or a plane in 3D). If they are linearly separable then in almost all cases there will be many possible choices for the linear decision boundary, each one of which will produce no classification errors on the training data. Which one should we choose? If we place the boundary very close to some of the data, there seems to be a greater danger that we will misclassify test data, especially when the training data are almost certainly noisy. This motivates the idea of placing the boundary to maximize the margin, that is, the distance from the hyperplane to the closest data point in either class. This can be thought of as having the largest margin for error: if you are driving a fast car between a scattered set of obstacles, it is safest to find a path that stays as far from them as possible.

More precisely, in a maximum margin method, we want to optimize the following objective function:

    max_{w,b} min_i dist(x_i, w, b)                                      (2)
    such that, for all i,  y_i (w^T φ(x_i) + b) ≥ 0                      (3)

where dist(x_i, w, b) is the Euclidean distance from the feature point φ(x_i) to the hyperplane defined by w and b. With this objective function we are maximizing the distance from the decision boundary w^T φ(x) + b = 0 to the nearest point. The constraints force us to find a decision boundary that classifies all training data correctly. That is, for the classifier to label a training point correctly, y_i and w^T φ(x_i) + b should have the same sign, in which case their product must be positive.
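To make the distance function in Eqn. (2) concrete, here is a minimal sketch (the function and variable names are our own, assuming NumPy and the identity features φ(x) = x) that computes dist(x, w, b) = |w^T x + b| / ||w||:

    import numpy as np

    def dist_to_hyperplane(x, w, b):
        # Euclidean distance from the point x to the hyperplane w^T x + b = 0.
        return abs(w @ x + b) / np.linalg.norm(w)

    # Example: distance from (1, 1) to the line x1 + x2 - 1 = 0.
    w = np.array([1.0, 1.0])
    b = -1.0
    print(dist_to_hyperplane(np.array([1.0, 1.0]), w, b))  # 1/sqrt(2) ≈ 0.707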
Figure 1: Left: the margin for a decision boundary is the distance to the nearest data point. Right: in SVMs, we find the boundary with maximum margin. (Figure from Pattern Recognition and Machine Learning by Chris Bishop.)

It can be shown that the distance from a point φ(x_i) to the hyperplane w^T φ(x) + b = 0 is given by |w^T φ(x_i) + b| / ||w||, or, since y_i tells us the sign of f(x_i), y_i (w^T φ(x_i) + b) / ||w||. This can be seen intuitively by writing the hyperplane in the form f(x) = w^T (φ(x) − p), where p is a point on the hyperplane such that w^T p = −b. The vector from φ(x_i) to the hyperplane, projected onto w/||w||, gives a vector from the hyperplane to the point; the length of this vector is the desired distance.

Substituting this expression for the distance function into the above objective function, we get:

    max_{w,b} min_i  y_i (w^T φ(x_i) + b) / ||w||                        (4)
    such that, for all i,  y_i (w^T φ(x_i) + b) ≥ 0                      (5)

Note that, because of the normalization by ||w|| in (4), the scale of w is arbitrary in this objective function. That is, if we were to multiply w and b by some real scalar α, the factors of α in the numerator and denominator would cancel one another. Now, suppose that we choose the scale so that the nearest point to the hyperplane, x_i, satisfies y_i (w^T φ(x_i) + b) = 1. With this assumption the min in Eqn. (4) becomes redundant and can be removed. Thus we can rewrite the objective function and the constraint as

    max_{w,b}  1 / ||w||                                                 (6)
    such that, for all i,  y_i (w^T φ(x_i) + b) ≥ 1                      (7)

Finally, as a last step, since maximizing 1/||w|| is the same as minimizing ||w||^2 / 2, we can re-express the optimization problem as

    min_{w,b}  (1/2) ||w||^2                                             (8)
    such that, for all i,  y_i (w^T φ(x_i) + b) ≥ 1                      (9)
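As a hedged illustration of how the problem in Eqns. (8) and (9) might be solved directly, the following sketch poses it with the CVXPY modeling library (assuming linear features φ(x) = x, linearly separable toy data of our own choosing, and that CVXPY is installed):

    import numpy as np
    import cvxpy as cp

    # Toy linearly separable data: N points in D = 2 dimensions.
    X = np.array([[2.0, 2.0], [2.5, 3.0], [-1.0, -1.5], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    N, D = X.shape

    w = cp.Variable(D)
    b = cp.Variable()

    # Eqn. (8): minimize (1/2)||w||^2 ...
    objective = cp.Minimize(0.5 * cp.sum_squares(w))
    # ... subject to Eqn. (9): y_i (w^T x_i + b) >= 1 for all i.
    constraints = [cp.multiply(y, X @ w + b) >= 1]

    cp.Problem(objective, constraints).solve()
    print("w =", w.value, "b =", b.value)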
This optimization problem is a quadratic program, or QP, because the objective function is quadratic in the unknowns and all of the constraints are linear in the unknowns. A QP has a single global minimum, which can be found efficiently with current optimization packages.

To understand this optimization problem, note that the constraints will be active for only a few data points. That is, only a few data points will be close to the margin, thereby constraining the solution. These points are called the support vectors. Small movements of the other data points have no effect on the decision boundary. Indeed, the decision boundary is determined only by the support vectors. Of course, moving points to within the margin of the decision boundary will change which points are support vectors, and thus change the decision boundary. This is in contrast to the probabilistic methods we have seen earlier in the course, in which the positions of all data points affect the location of the decision boundary.

17.2 Slack Variables for Non-Separable Datasets

Many datasets will not be linearly separable. As a result, there will be no way to satisfy all the constraints in Eqn. (9). One way to cope with such datasets and still learn useful classifiers is to loosen some of the constraints by introducing slack variables.

Slack variables are introduced to allow certain constraints to be violated. That is, certain training points will be allowed to lie within the margin. We want the number of points within the margin to be as small as possible, and of course we want their penetration of the margin to be as small as possible. To this end, we introduce a slack variable ξ_i, one for each data point. (ξ is the Greek letter xi, pronounced "ksi".)

The slack variables are introduced into the optimization problem in two ways. First, the slack variable ξ_i dictates the degree to which the constraint on the i-th data point can be violated. Second, by adding the slack variables to the objective function we aim to simultaneously minimize their use. Mathematically, the new optimization problem can be expressed as

    min_{w,b,ξ_{1:N}}  Σ_i ξ_i + λ (1/2) ||w||^2                         (10)
    such that, for all i,  y_i (w^T φ(x_i) + b) ≥ 1 − ξ_i  and  ξ_i ≥ 0  (11)

As discussed above, we aim to both maximize the margin and minimize violation of the margin constraints. This objective function is still a QP, and so can be optimized with a QP library. However, it does have a much larger number of optimization variables, namely, one slack variable ξ_i must now be optimized for each data point. In practice, SVMs are normally optimized with special-purpose optimization procedures designed specifically for SVMs.
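Continuing the hedged CVXPY sketch above, the soft-margin problem in Eqns. (10) and (11) adds one slack variable per data point (the data and the value of λ are again our own choices):

    import numpy as np
    import cvxpy as cp

    # Toy data that is no longer linearly separable.
    X = np.array([[2.0, 2.0], [2.5, 3.0], [-1.0, -1.5], [-2.0, -1.0], [-0.5, -0.5]])
    y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])   # last point lies on the wrong side
    N, D = X.shape
    lam = 1.0                                    # regularization weight λ

    w, b, xi = cp.Variable(D), cp.Variable(), cp.Variable(N)

    # Eqn. (10): minimize sum_i ξ_i + (λ/2)||w||^2 ...
    objective = cp.Minimize(cp.sum(xi) + 0.5 * lam * cp.sum_squares(w))
    # ... subject to Eqn. (11): y_i (w^T x_i + b) >= 1 - ξ_i and ξ_i >= 0.
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0]

    cp.Problem(objective, constraints).solve()
    print("slacks:", np.round(xi.value, 3))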
Figure 2: The slack variables: ξ_i ≥ 1 for misclassified points, and 0 < ξ_i < 1 for points close to the decision boundary. (Figure from Pattern Recognition and Machine Learning by Chris Bishop.)

17.3 Loss Functions

To better understand the behavior of SVMs, and how they compare to other methods, we will analyze them in terms of their loss functions.¹ In some cases, the loss function might come from the problem being solved: for example, we might pay a certain dollar amount if we incorrectly classify a vector, and the penalty for a false positive might be very different from the penalty for a false negative. The rewards and losses due to correct and incorrect classification depend on the particular problem being optimized. Here, we will simply attempt to minimize the total number of classification errors, using a penalty called the 0-1 loss:

    L_{0-1}(x, y) = 1  if y f(x) < 0
                    0  otherwise                                         (12)

(Note that y f(x) > 0 is the same as requiring that y and f(x) have the same sign.) This loss function says that we pay a penalty of 1 when we misclassify a new input, and a penalty of zero if we classify it correctly.

Ideally, we would choose the classifier to minimize the loss over the new test data that we are given; of course, we don't know the true labels, and instead we optimize the following surrogate objective function over the training data:

    E(w) = Σ_i L(x_i, y_i) + λ R(w)                                      (13)

where R(w) is a regularizer meant to prevent overfitting (and thus improve performance on future test data).

¹ A loss function specifies a measure of the quality of a solution to an optimization problem. It is the penalty function that tells us how badly we want to penalize errors in a model's ability to fit the data. In probabilistic methods it is typically the negative log-likelihood or the negative log-posterior.
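As a small sketch (the function names are our own, assuming NumPy), the 0-1 loss in Eqn. (12) and the surrogate objective in Eqn. (13), with a linear classifier and the regularizer R(w) = ||w||^2, can be written as:

    import numpy as np

    def zero_one_loss(y, fx):
        # Eqn. (12): pay 1 when y and f(x) have opposite signs, 0 otherwise.
        return (y * fx < 0).astype(float)

    def training_objective(w, b, X, y, lam):
        # Eqn. (13) with L the 0-1 loss and regularizer R(w) = ||w||^2.
        fx = X @ w + b
        return zero_one_loss(y, fx).sum() + lam * (w @ w)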
Figure 3: Loss functions E(z) for learning, plotted as functions of z = y f(x). Black: 0-1 loss. Red: LR loss. Green: quadratic loss ((z − 1)^2). Blue: hinge loss. (Figure from Pattern Recognition and Machine Learning by Chris Bishop.)

The basic assumption is that loss on the training set should correspond to loss on the test set: if we can get the classifier to have small loss on the training data, while also being smooth, then the loss we pay on new data ought not to be too big either. This optimization framework is equivalent to MAP estimation as discussed previously²; however, here we are not at all concerned with probabilities. We only care about whether the classifier gets the right answers or not.

Unfortunately, optimizing a classifier for the 0-1 loss is very difficult: it is not differentiable everywhere, and, where it is differentiable, the gradient is zero. There is a family of algorithms called Perceptron Learning that attempts to do this; of these, the Voted Perceptron algorithm is considered one of the best. However, these methods are somewhat complex to analyze and we will not discuss them further. Instead, we will use other loss functions that approximate the 0-1 loss.

We can see that maximum likelihood logistic regression is equivalent to optimization with the following loss function:

    L_LR = ln(1 + e^{−y f(x)})                                           (14)

which is the negative log-likelihood of a single data vector. This function is a poor approximation to the 0-1 loss, and, if all we care about is getting the labels right (and not the class probabilities), then we ought to search for a better approximation.

SVMs minimize the slack variables, which, from the constraints, can be seen to give the hinge loss:

    L_hinge = 1 − y f(x)  if 1 − y f(x) > 0
              0           otherwise                                      (15)

² However, not all loss functions can be viewed as the negative log of a valid likelihood function, although all negative log-likelihoods can be viewed as loss functions for learning.
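A small hedged sketch (pure NumPy, with our own variable names) comparing the losses of Eqns. (12), (14), and (15) as functions of z = y f(x), in the spirit of Figure 3:

    import numpy as np

    z = np.linspace(-2.0, 2.0, 9)        # z = y * f(x)
    loss_01 = (z < 0).astype(float)      # 0-1 loss, Eqn. (12)
    loss_lr = np.log(1 + np.exp(-z))     # LR loss, Eqn. (14)
    loss_hinge = np.maximum(0, 1 - z)    # hinge loss, Eqn. (15)

    for row in zip(z, loss_01, loss_lr, loss_hinge):
        print("z=%5.2f  0-1=%3.1f  LR=%5.3f  hinge=%4.2f" % row)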
Thus, when a data point is correctly classified and further from the decision boundary than the margin, the loss is zero. In this way the hinge loss is insensitive to correctly-classified points far from the boundary. But when the point is within the margin or incorrectly classified, the loss is simply the magnitude of the slack variable, i.e., ξ = 1 − y f(x), where f(x) = w^T φ(x) + b. The hinge loss therefore increases linearly for misclassified points, which is not nearly as quickly as the LR loss.

17.4 The Lagrangian and the Kernel Trick

We now use the Lagrangian to transform the SVM problem in a way that will lead to a powerful generalization. For simplicity here we assume that the dataset is linearly separable, and so we drop the slack variables.

The Lagrangian allows us to take the constrained optimization problem in Eqn. (9) above and re-express it as an unconstrained problem. The Lagrangian for the SVM objective function in Eqn. (9), with Lagrange multipliers a_i ≥ 0, is:

    L(w, b, a_{1:N}) = (1/2) ||w||^2 − Σ_i a_i ( y_i (w^T φ(x_i) + b) − 1 )        (16)

The minus sign on the second term is used because we are minimizing with respect to the first term, but maximizing with respect to the second. Setting the derivatives dL/dw = 0 and dL/db = 0 gives the following constraints on the solution:

    w = Σ_i a_i y_i φ(x_i)                                               (17)
    Σ_i y_i a_i = 0                                                      (18)

Using (17) we can substitute for w in (16). Simplifying the result, and making use of the constraint (18), one can derive what is often called the dual Lagrangian:

    L(a_{1:N}) = Σ_i a_i − (1/2) Σ_i Σ_j a_i a_j y_i y_j φ(x_i)^T φ(x_j)           (19)

While this objective function is actually more expensive to evaluate than the primal Lagrangian (i.e., (16)), it does lead to the following modified form:

    L(a_{1:N}) = Σ_i a_i − (1/2) Σ_i Σ_j a_i a_j y_i y_j k(x_i, x_j)               (20)

where k(x_i, x_j) = φ(x_i)^T φ(x_j) is called a kernel function. For example, if we used the basic linear features, i.e., φ(x) = x, then k(x_i, x_j) = x_i^T x_j.

The advantage of the kernel function representation is that it frees us from thinking about the features directly; the classifier can be specified solely in terms of the kernel.
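As a hedged sketch of how the dual in Eqn. (20) might be solved for a small problem, the following uses SciPy's general-purpose SLSQP solver (rather than the special-purpose SVM optimizers used in practice) with a linear kernel; the data and names are our own:

    import numpy as np
    from scipy.optimize import minimize

    # Small linearly separable dataset.
    X = np.array([[2.0, 2.0], [2.5, 3.0], [-1.0, -1.5], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    N = len(y)
    K = X @ X.T                              # linear kernel: k(x_i, x_j) = x_i^T x_j

    def neg_dual(a):
        # Negative of Eqn. (20), since we minimize rather than maximize.
        za = a * y
        return -(a.sum() - 0.5 * za @ K @ za)

    res = minimize(neg_dual, np.zeros(N), method="SLSQP",
                   bounds=[(0.0, None)] * N,                            # a_i >= 0
                   constraints={"type": "eq", "fun": lambda a: a @ y})  # Eqn. (18)
    a = res.x
    w = (a * y) @ X                          # recover w via Eqn. (17)
    print("a =", np.round(a, 3), " w =", np.round(w, 3))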
Any kernel that satisfies a specific technical condition³ is a valid kernel. For example, one of the most commonly-used kernels is the RBF kernel:

    k(x, z) = e^{−γ ||x − z||^2}                                         (21)

which corresponds to a vector of features φ(x) with infinite dimensionality! (Specifically, each element of φ is a Gaussian basis function with vanishing variance.)

Note that, just as most constraints in Eqn. (9) are not active, the same will be true here. That is, only some constraints will be active (these correspond to the support vectors), and for all other constraints, a_i = 0. Hence, once the model is learned, most of the training data can be discarded; only the support vectors and their a_i values matter.

The one final thing we need to do is estimate the bias b. We now know the values of a_i for all support vectors (i.e., for the data constraints that are active), and hence we know w. Accordingly, for all support vectors we know, by the scaling assumption above, that

    y_i ( w^T φ(x_i) + b ) = 1.                                          (22)

From this one can easily solve for b.

Applying the SVM to new data. For the kernel representation to be useful, we need to be able to classify new data without needing to evaluate the weights explicitly. This can be done as follows:

    f(x_new) = w^T φ(x_new) + b                                          (23)
             = ( Σ_i a_i y_i φ(x_i) )^T φ(x_new) + b                     (24)
             = Σ_i a_i y_i k(x_i, x_new) + b                             (25)

Generalizing the kernel representation to non-separable datasets (i.e., with slack variables) is straightforward, but will not be covered in this course.

17.5 Choosing parameters

To determine an SVM classifier, one must select:

- The regularization weight λ
- The parameters of the kernel function
- The type of kernel function

These values are typically selected either by hand-tuning or by cross-validation.

³ Specifically, suppose one is given N input points x_{1:N}, and forms a matrix K such that K_{i,j} = k(x_i, x_j). For k to be a valid kernel, this matrix must be positive semidefinite (i.e., all eigenvalues non-negative) for all possible input sets.
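Putting the pieces together, here is a hedged sketch (our own names, assuming NumPy) of the RBF kernel from Eqn. (21) and the kernel-form prediction from Eqn. (25), given multipliers a_i, labels y_i, support vectors, and bias b obtained from training:

    import numpy as np

    def rbf_kernel(x, z, gamma=1.0):
        # Eqn. (21): k(x, z) = exp(-gamma * ||x - z||^2)
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def predict(x_new, X_sv, y_sv, a_sv, b, gamma=1.0):
        # Eqn. (25): f(x_new) = sum_i a_i y_i k(x_i, x_new) + b,
        # summing over the support vectors only; the predicted
        # class is sign(f(x_new)).
        k = np.array([rbf_kernel(x_i, x_new, gamma) for x_i in X_sv])
        return (a_sv * y_sv) @ k + b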
Figure 4: Nonlinear classification boundary learned using a kernel SVM (with an RBF kernel). The circled points are the support vectors; the curves are isocontours of the decision function (e.g., the decision boundary f(x) = 0). (Figure from Pattern Recognition and Machine Learning by Chris Bishop.)

17.6 Software

As with many methods in machine learning, there is freely available SVM software on the web. For SVM classification and regression there is well-known software developed by Thorsten Joachims, called SVMlight.
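As a hedged practical note (scikit-learn is one other freely available package; the data and parameter values below are arbitrary), a kernel SVM of the kind shown in Figure 4 can be fit as follows. Here C plays the role of 1/λ, and gamma is the RBF parameter γ from Eqn. (21):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[2.0, 2.0], [2.5, 3.0], [-1.0, -1.5], [-2.0, -1.0], [-0.5, -0.5]])
    y = np.array([1, 1, -1, -1, 1])

    clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
    print("support vectors:", clf.support_vectors_)
    print("prediction:", clf.predict([[1.0, 1.0]]))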