Margin and Radius Based Multiple Kernel Learning


Huyen Do, Alexandros Kalousis, Adam Woznica, and Melanie Hilario
University of Geneva, Computer Science Department
7, route de Drize, Battelle batiment A, 1227 Carouge, Switzerland
{Huyen.Do, Alexandros.Kalousis, Adam.Woznica,

Abstract. A serious drawback of kernel methods, and Support Vector Machines (SVM) in particular, is the difficulty in choosing a suitable kernel function for a given dataset. One of the approaches proposed to address this problem is Multiple Kernel Learning (MKL), in which several kernels are combined adaptively for a given dataset. Many of the existing MKL methods use the SVM objective function and try to find a linear combination of basic kernels such that the separating margin between the classes is maximized. However, these methods ignore the fact that the theoretical error bound depends not only on the margin, but also on the radius of the smallest sphere that contains all the training instances. We present a novel MKL algorithm that optimizes the error bound taking account of both the margin and the radius. The empirical results show that the proposed method compares favorably with other state-of-the-art MKL methods.

Key words: Learning Kernel Combination, Support Vector Machines, convex optimization

1 Introduction

Over the last few years kernel methods [1, 2], such as Support Vector Machines (SVM), have proved to be efficient machine learning tools. They work in a feature space implicitly defined by a positive semi-definite kernel function, which allows the computation of inner products in feature spaces using only the objects in the input space. The main limitation of kernel methods stems from the fact that in general it is difficult to select a kernel function, and hence a feature mapping, that is suitable for a given problem. To address this problem several attempts have recently been made to learn kernel operators directly from the data [3-12]. The proposed methods differ in their objective functions (e.g. CV risk, margin based, alignment, etc.) as well as in the classes of kernels that they consider (e.g. combinations of finite or infinite sets of basic kernels). The most popular approach in the context of kernel learning considers a finite set of predefined basic kernels which are combined so that the margin-based objective function of SVM is optimized.

The learned kernel K is a linear combination of basic kernels K_k, i.e. K(x, x') = Σ_k µ_k K_k(x, x'), µ_k ≥ 0, where M is the number of basic kernels and x, x' are input objects. The weights µ_k of the kernels are included in the margin-based objective function. This setting is commonly referred to as Multiple Kernel Learning (MKL). The MKL formulation was introduced in [3] as a semi-definite programming problem, which scaled well only for small problems. [7] extended that work and proposed a faster method based on the conic duality of MKL, solving the problem with Sequential Minimal Optimization (SMO). [5] reformulated the MKL problem as a semi-infinite linear program. In [6] the authors proposed an adjustment to the cost function of [5] in order to improve predictive performance. Although the MKL approach to kernel learning has some limitations (e.g. one has to choose the basic kernels), it is widely used because of its simplicity, interpretability and good performance.

The MKL methods that use the SVM objective function do not exploit the fact that the error bound of SVM depends not only on the separating margin, but also on the radius of the smallest sphere that encloses the data. In fact, even the standard SVM algorithms do not exploit the latter, because for a given feature space the radius is fixed. However, in the context of MKL the radius is not fixed but is a function of the weights of the basic kernels. In this paper we propose a novel MKL method that takes account of both radius and margin to optimize the error bound. Following a number of transformations, the resulting problems are cast in a form that can be solved by the two-step optimization algorithm given in [6].

The paper is organized as follows. In Section 2 we introduce the general MKL framework. Next, in Section 3 we discuss the various error bounds that motivate the use of the radius. The main contribution of the work is presented in Section 4, where we propose a new method for multiple kernel learning that aims to optimize the margin- and radius-dependent error bound. In Section 5 we present the empirical results on several benchmark datasets. Finally, we conclude with Section 6, where we also give pointers to future work.

2 Multiple kernel learning problem

Consider a mapping of instances x ∈ X to a new feature space H_k:

\[ x \mapsto \Phi_k(x) \in \mathcal{H}_k \tag{1} \]

This mapping can be performed by a kernel function K_k(x, x'), defined as the inner product of the images of the two instances x and x' in H_k, i.e. K_k(x, x') = ⟨Φ_k(x), Φ_k(x')⟩; H_k may even have infinite dimensionality. Typically, the computation of the inner product in H_k is done implicitly, i.e. without having to compute explicitly the images Φ_k(x) and Φ_k(x').

2.1 Original problem formulation of MKL

Given a set of training examples S = {(x_1, y_1), ..., (x_l, y_l)} and a set of basic kernel functions Z = {K_k(x, x') : k = 1, ..., M}, the goal of MKL is to optimize a cost function Q(f(Z, µ)(x, x'), S), where f(Z, µ)(x, x') is some positive semi-definite function of the set of basic kernels, parametrized by µ; most often it is a linear combination of the form:

\[ f(Z, \mu)(x, x') = \sum_{k=1}^{M} \mu_k K_k(x, x'), \qquad \mu_k \ge 0, \quad \sum_{k=1}^{M} \mu_k = 1 \tag{2} \]

To simplify notation we will denote f(Z, µ) by K_µ. In the remaining part of this work we will only focus on the normalized versions of the K_k, defined as:

\[ K_k(x, x') := \frac{K_k(x, x')}{\sqrt{K_k(x, x)\, K_k(x', x')}} \tag{3} \]

If K_µ is a linear combination of kernels then its feature space H_µ is given by the mapping:

\[ x \mapsto \Phi_\mu(x) = \big(\sqrt{\mu_1}\,\Phi_1(x), \ldots, \sqrt{\mu_M}\,\Phi_M(x)\big)^T \in \mathcal{H}_\mu \tag{4} \]

where Φ_k(x) is the mapping to the H_k feature space associated with the K_k kernel, as given in Formula 1.

In previous work within the MKL context the cost function Q has taken different forms, such as the Kernel Target Alignment, which measures the goodness of a kernel for a given learning task [9], the typical SVM cost function combining the classification error and the margin [3, 5, 6], or, as in [4], any of the above with an added regularization term for the complexity of the combined kernel.
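To make the setting of Equations (2)-(4) concrete, here is a minimal sketch (ours, not the authors' code) of how normalized Gram matrices and their weighted combination K_µ can be computed with NumPy; the toy data and the particular Gaussian and polynomial kernels are illustrative assumptions.

```python
import numpy as np

def normalize_kernel(K):
    """Normalize a Gram matrix as in Eq. (3): K(x, x') / sqrt(K(x, x) * K(x', x'))."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def combined_kernel(kernels, mu):
    """Weighted combination of Eq. (2): K_mu = sum_k mu_k K_k, with mu on the simplex."""
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
    return sum(m * K for m, K in zip(mu, kernels))

# Toy example: two Gaussian kernels and one degree-2 polynomial kernel (illustrative choices).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
kernels = [normalize_kernel(np.exp(-sq_dists / (2.0 * s ** 2))) for s in (1.0, 2.0)]
kernels.append(normalize_kernel((X @ X.T + 1.0) ** 2))
K_mu = combined_kernel(kernels, [0.5, 0.3, 0.2])
```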

3 Margin and radius based error bounds

There are a number of theorems in statistical learning that bound the expected classification error of the thresholded linear classifier corresponding to the maximum margin hyperplane by quantities related to the margin and to the radius of the smallest sphere that encloses the data. Below we give two of them, applicable to linearly separable and non-separable training sets, respectively.

Theorem 1 ([10]). Given a training set S = {(x_1, y_1), ..., (x_l, y_l)} of size l, a feature space H and a hyperplane (w, b), the margin γ(w, b, S) and the radius R(S) are defined by

\[ \gamma(w, b, S) = \min_{(x_i, y_i) \in S} \frac{y_i(\langle w, \Phi(x_i)\rangle + b)}{\|w\|}, \qquad R(S) = \min_{a} \max_{i} \|\Phi(x_i) - a\| \]

The maximum margin algorithm L_l : (X × Y)^l → H × R takes as input a training set of size l and returns a hyperplane in feature space such that the margin γ(w, b, S) is maximized. Note that assuming the training set is separable means that γ > 0. Under this assumption, for all probability measures P underlying the data S, the expectation of the misclassification probability p_err(w, b) = P(sgn(⟨w, Φ(x)⟩ + b) ≠ Y) has the bound

\[ E\{p_{err}(L_{l-1}(Z))\} \le \frac{1}{l}\, E\left\{ \frac{R^2(Z)}{\gamma^2(L_l(Z), Z)} \right\} \]

The expectation is taken over the random draw of a training set Z of size l-1 for the left hand side and of size l for the right hand side.

The following theorem gives a similar result for the error bound in the linearly non-separable case.

Theorem 2 ([13]). Consider thresholding real-valued linear functions L with unit weight vectors on an inner product space H and fix γ ∈ R_+. There is a constant c such that, for any probability distribution D on H × {-1, +1} with support in a ball of radius R around the origin, with probability 1 - δ over l random examples S, any hypothesis f ∈ L has error no more than

\[ err_D(f) \le \frac{c}{l}\left( \frac{R^2 + \|\xi\|_2^2}{\gamma^2} \log^2 l + \log\frac{1}{\delta} \right) \tag{5} \]

where ξ = ξ(f, S, γ) is the margin slack vector with respect to f and γ.

It is clear from both theorems that the bound on the expected error depends not only on the margin but also on the radius of the data, being a function of the R²/γ² ratio. Nevertheless, standard SVM algorithms can ignore the dependency of the error bound on the radius because, for a fixed feature space, the radius is constant and can simply be ignored in the optimization procedure. However, in the MKL scenario the feature space H_µ is not fixed but depends on the parameter vector µ; the radius is then no longer fixed but is a function of µ, and thus should not be ignored in the optimization procedure.

The radius of the smallest sphere that contains all instances in the H feature space defined by the Φ(x) mapping is computed by the following problem [14]:

\[ \min_{R,\, \Phi(x_0)}\ R^2 \quad \text{s.t.} \quad \|\Phi(x_i) - \Phi(x_0)\|^2 \le R^2, \ \forall i \tag{6} \]

It can be shown that if K_µ is a linear combination of kernels, of the form given in Formula 2, then the radius R_µ of its H_µ feature space satisfies the following inequalities:

\[ \max_k\big(\mu_k R_k^2\big) \ \le\ R_\mu^2 \ \le\ \sum_{k=1}^{M} \mu_k R_k^2 \ \le\ \max_k\big(R_k^2\big), \qquad \text{with } \sum_{k=1}^{M}\mu_k = 1 \tag{7} \]

where R_k is the radius of the component feature space H_k associated with the K_k kernel. The proof of this statement is given in the appendix.
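To illustrate how the quantities in (6)-(7) can be obtained in practice, the sketch below (our own, not from the paper) estimates the squared radius of the minimum enclosing ball from a Gram matrix via the dual problem over the simplex (spelled out in the appendix, Formula 18), using a simple Frank-Wolfe scheme as the solver of our choice, and then evaluates the upper bound Σ_k µ_k R_k² of Inequality (7).

```python
import numpy as np

def squared_radius(K, n_iter=300):
    """Approximate the squared radius R^2 of the minimum enclosing ball in the feature
    space of the Gram matrix K, via the dual of Eq. (6) (Formula 18 in the appendix):
    max_beta sum_i beta_i K_ii - beta^T K beta over the simplex, solved here with a
    plain Frank-Wolfe scheme (our choice of solver, not the paper's)."""
    n = K.shape[0]
    beta = np.full(n, 1.0 / n)
    diag = np.diag(K).copy()
    for t in range(n_iter):
        grad = diag - 2.0 * (K @ beta)      # gradient of the concave dual objective
        j = int(np.argmax(grad))            # best vertex of the simplex
        step = 2.0 / (t + 2.0)
        beta *= (1.0 - step)
        beta[j] += step
    return float(beta @ diag - beta @ K @ beta)

def radius_upper_bound(mu, R2):
    """Upper bound of Inequality (7): sum_k mu_k R_k^2, given per-kernel squared radii."""
    return float(np.dot(mu, R2))
```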

4 MKL with margin and radius optimization

In the next sections we show how to make direct use of the dependency of the error bound on both the margin and the radius in the context of the MKL problem, in an effort to decrease the error bound even more than is possible by optimizing over the margin alone.

4.1 Soft margin MKL

The standard l2-soft margin SVM is based on Theorem 2 and learns maximal margin hyperplanes while controlling the l2 norm of the slack vector, in an effort to optimize the error bound given in Equation 5; as already mentioned, the radius, although it appears in the error bound, is not considered in the optimization problem because for a given feature space it is fixed. The exact optimization problem solved by the l2-soft margin SVM is [13]:

\[ \min_{w, b, \xi}\ \frac{1}{2}\langle w, w\rangle + \frac{C}{2}\|\xi\|^2 \quad \text{s.t.} \quad y_i\big(\langle w, \Phi(x_i)\rangle + b\big) \ge 1 - \xi_i,\ \forall i \tag{8} \]

The solution hyperplane (w*, b*) of this problem realizes the maximum margin classifier with geometric margin γ* = 1/||w*||.

When, instead of a single kernel, we learn with a combination of kernels K_µ, the radius of the resulting feature space H_µ depends on the parameters µ, which are also learned. We can profit from this additional dependency and optimize not only for the margin but also for the radius, as Theorems 1 and 2 suggest, in the hope of reducing the error bounds even more than would be possible by focusing on the margin alone. A straightforward way to do so is to alter the cost function of the above optimization problem so that it also includes the radius. Thus we define the primal form of the soft margin MKL optimization problem as follows:

\[ \min_{w, b, \xi, \mu}\ \frac{1}{2}\langle w, w\rangle R_\mu^2 + \frac{C}{2}\|\xi\|^2 \quad \text{s.t.} \quad y_i\big(\langle w, \Phi_\mu(x_i)\rangle + b\big) \ge 1 - \xi_i,\ \forall i \tag{9} \]

Accounting for the form Φ_µ of the feature space H_µ, as given in Equation 4, this optimization problem can be rewritten as:

\[ \min_{w, b, \xi, \mu}\ \frac{1}{2}\langle w, w\rangle R_\mu^2 + \frac{C}{2}\|\xi\|^2 \quad \text{s.t.} \quad y_i\Big(\sum_{k=1}^{M}\langle w_k, \sqrt{\mu_k}\,\Phi_k(x_i)\rangle + b\Big) \ge 1 - \xi_i,\ \forall i \tag{10} \]

where w is the same as that of Formula 9 and equal to (w_1, ..., w_M), and R_µ² can be computed by Equation 6. By the change of variable w_k := √µ_k · w_k, so that ⟨w_k, √µ_k Φ_k(x_i)⟩ becomes ⟨w_k, Φ_k(x_i)⟩ and ⟨w_k, w_k⟩ becomes ⟨w_k, w_k⟩/µ_k, we can rewrite Equation 10 as (note that if µ_k = 0 then from the dual form we have w_k = 0; in this case we use the convention 0/0 = 0):

\[ \min_{w, b, \xi, \mu}\ \frac{1}{2}\sum_{k=1}^{M}\frac{\langle w_k, w_k\rangle}{\mu_k}\, R_\mu^2 + \frac{C}{2}\sum_{i=1}^{l}\xi_i^2 \quad \text{s.t.} \quad y_i\Big(\sum_{k=1}^{M}\langle w_k, \Phi_k(x_i)\rangle + b\Big) \ge 1 - \xi_i,\ \forall i, \quad \sum_{k=1}^{M}\mu_k = 1,\ \mu_k \ge 0 \tag{11} \]

The non-negativity of µ is required to guarantee that the kernel combination is a valid kernel function; the constraint Σ_k µ_k = 1 is added to make the solution interpretable (a kernel with a bigger weight can be interpreted as a more important one) and to pin down a specific solution (note that if µ is a solution of 11 without the constraint Σ_k µ_k = 1, then λµ, λ ∈ R_+, is also a solution).

We will denote the cost function of Equation 11 by F(w, b, ξ, µ). F is not a convex function; this is probably the main reason why in current MKL algorithms the radius is simply removed from the original cost function, so that they do not really optimize the generalization error bound. From the set of inequalities given in Equation 7 we have R_µ² ≤ Σ_k µ_k R_k², and from this we can get:

\[ F(w, b, \xi, \mu) = \frac{1}{2}\sum_{k}\frac{\langle w_k, w_k\rangle}{\mu_k}\, R_\mu^2 + \frac{C}{2}\sum_{i}\xi_i^2 \ \le\ \frac{1}{2}\sum_{k}\frac{\langle w_k, w_k\rangle}{\mu_k}\sum_{k}\mu_k R_k^2 + \frac{C}{2}\sum_{i}\xi_i^2 \tag{12} \]
\[ = \Big(\sum_{k}\mu_k R_k^2\Big)\Big(\frac{1}{2}\sum_{k}\frac{\langle w_k, w_k\rangle}{\mu_k} + \frac{C}{2\sum_{k}\mu_k R_k^2}\sum_{i}\xi_i^2\Big) \ \le\ \frac{1}{2}\sum_{k}\frac{\langle w_k, w_k\rangle}{\mu_k} + \frac{C}{2\sum_{k}\mu_k R_k^2}\sum_{i}\xi_i^2 \ =: \tilde{F}(w, b, \xi, \mu) \]

The last inequality holds because, in the context we examine, Σ_k µ_k R_k² ≤ 1. This is a result of the fact that we work with normalized feature spaces, using the normalized kernels defined in Equation 3: then R_k² ≤ 1 and, since Σ_k µ_k = 1, it holds that Σ_k µ_k R_k² ≤ 1.

Since F̃ is an upper bound of F and moreover it is convex (the convexity of this new function can be easily proved by showing that the Hessian matrix is positive semi-definite; see also the appendix), we are going to use it as our objective function. As a result, we propose to solve, instead of the original soft margin optimization problem given in Equation 11, the following upper-bounding convex optimization problem:

\[ \min_{w, b, \xi, \mu}\ \frac{1}{2}\sum_{k=1}^{M}\frac{\langle w_k, w_k\rangle}{\mu_k} + \frac{C}{2\sum_{k}\mu_k R_k^2}\sum_{i=1}^{l}\xi_i^2 \quad \text{s.t.} \quad y_i\Big(\sum_{k=1}^{M}\langle w_k, \Phi_k(x_i)\rangle + b\Big) \ge 1 - \xi_i,\ \forall i, \quad \sum_{k=1}^{M}\mu_k = 1,\ \mu_k \ge 0 \tag{13} \]

The dual function of this optimization problem is:

\[ W_s(\alpha, \mu) = \sum_{i}\alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j \sum_{k}\mu_k K_k(x_i, x_j) - \frac{\sum_{k}\mu_k R_k^2}{2C}\,\langle\alpha, \alpha\rangle \tag{14} \]
\[ = \sum_{i}\alpha_i - \frac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j \Big(\sum_{k}\mu_k K_k(x_i, x_j) + \frac{\sum_{k}\mu_k R_k^2}{C}\,\delta_{ij}\Big) \]

where δ_ij is the Kronecker δ, defined to be 1 if i = j and 0 otherwise. The dual optimization problem is given as:

\[ \max_{\alpha, \mu}\ W_s(\alpha, \mu) \quad \text{s.t.} \quad \sum_{i}\alpha_i y_i = 0, \quad \alpha_i \ge 0, \quad \sum_{i,j}\alpha_i\alpha_j y_i y_j K_k(x_i, x_j) = \frac{R_k^2\, C \sum_i \xi_i^2}{\big(\sum_{k}\mu_k R_k^2\big)^2},\ \forall k \tag{15} \]

In the next section we will show how we can solve this new optimization problem.
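For concreteness, here is a small sketch (ours, assuming the basic kernels are available as precomputed Gram matrices and the per-kernel squared radii R_k² have been computed beforehand) of evaluating the dual objective W_s(α, µ) of Equation (14); the l2-slack term simply adds (Σ_k µ_k R_k²)/C to the diagonal of the combined kernel.

```python
import numpy as np

def dual_objective(alpha, mu, y, kernels, R2, C):
    """Evaluate W_s(alpha, mu) of Eq. (14): the l2-slack term adds
    (sum_k mu_k R_k^2) / C to the diagonal of the combined kernel."""
    K_mu = sum(m * K for m, K in zip(mu, kernels))
    K_eff = K_mu + (float(np.dot(mu, R2)) / C) * np.eye(len(y))
    v = alpha * y
    return float(alpha.sum() - 0.5 * v @ K_eff @ v)
```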

4.2 Algorithm

The dual function 14 is quadratic with respect to α and linear with respect to µ. One way to solve the optimization problem 13 is to use a two-step iterative algorithm such as the ones described in [6], [10]. Following such a two-step approach, in the first step we solve a quadratic problem that optimizes over (w, b) while keeping µ fixed; as a consequence the resulting dual function is a simple quadratic function of α which can be optimized easily. In the second step we solve a linear problem that optimizes over µ.

More precisely, the formulation of the optimization problem with the two-step approach takes the following form:

\[ \min_{\mu}\ J(\mu) \quad \text{s.t.} \quad \sum_{k=1}^{M}\mu_k = 1,\ \mu_k \ge 0 \tag{16} \]

where

\[ J(\mu) = \left\{\begin{array}{l} \min_{w, b, \xi}\ \frac{1}{2}\sum_{k=1}^{M}\frac{\langle w_k, w_k\rangle}{\mu_k} + \frac{C}{2\sum_{k}\mu_k R_k^2}\sum_{i}\xi_i^2 \\[4pt] \text{s.t.}\quad y_i\big(\sum_{k}\langle w_k, \Phi_k(x_i)\rangle + b\big) \ge 1 - \xi_i,\ \forall i \end{array}\right. \tag{17} \]

To solve the outer optimization problem, i.e. min_µ J(µ), we use a gradient descent method. At each iteration we fix µ, compute the value of J(µ), and then compute the gradient of J(µ) with respect to µ. The dual function of Formula 17 is the W_s(α, µ) function already given in Formula 14. Since µ is fixed, we now optimize only over α (the resulting dual optimization problem is much simpler than the original soft margin dual optimization problem given in Formula 15):

\[ \max_{\alpha}\ W_s(\alpha, \mu) \quad \text{s.t.} \quad \sum_{i}\alpha_i y_i = 0,\ \alpha_i \ge 0 \]

which has the same form as the SVM quadratic optimization problem; the only difference is that the C parameter here is equal to C / Σ_k µ_k R_k². By strong duality, at the optimal solution α* the values of the dual cost function and the primal cost function are equal. Thus the value of W_s(α*, µ), which is also the value of J(µ), is given by:

\[ W_s(\alpha^*, \mu) = \sum_{i}\alpha_i^* - \frac{1}{2}\sum_{i,j}\alpha_i^*\alpha_j^* y_i y_j \Big(\sum_{k}\mu_k K_k(x_i, x_j) + \frac{\sum_{k}\mu_k R_k^2}{C}\,\delta_{ij}\Big) \]
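Since the inner problem is a standard SVM dual with the modified penalty C / Σ_k µ_k R_k², one convenient way to compute J(µ) and α* is to hand an augmented Gram matrix to an off-the-shelf SVM solver. The sketch below is our shortcut, not the authors' implementation: it uses the fact that an l2-soft-margin SVM is equivalent to a (nearly) hard-margin SVM on the kernel K_µ + (Σ_k µ_k R_k²/C) I, and approximates the hard margin with a very large C in scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC

def inner_svm(mu, y, kernels, R2, C):
    """Solve the fixed-mu inner problem of Eq. (17); return J(mu) and the dual vector alpha.

    The l2-soft-margin dual equals a hard-margin SVM on K_mu + (sum_k mu_k R_k^2 / C) I;
    the hard margin is approximated by a very large C on the precomputed kernel.
    """
    n = len(y)
    K_eff = sum(m * K for m, K in zip(mu, kernels)) + (float(np.dot(mu, R2)) / C) * np.eye(n)
    svc = SVC(C=1e6, kernel="precomputed").fit(K_eff, y)
    alpha = np.zeros(n)
    alpha[svc.support_] = np.abs(svc.dual_coef_).ravel()   # dual_coef_ holds y_i * alpha_i
    v = alpha * y
    J = float(alpha.sum() - 0.5 * v @ K_eff @ v)           # J(mu) = W_s(alpha*, mu)
    return J, alpha
```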

The last step of the algorithm is to compute the gradient of the J(µ) function, Formula 17, with respect to µ. As [6] have pointed out, we can use the theorem of Bonnans and Shapiro [15] to compute the gradients of such functions. Hence, the gradient has the following form:

\[ \frac{\partial J(\mu)}{\partial \mu_k} = -\frac{1}{2}\sum_{i,j}\alpha_i^*\alpha_j^* y_i y_j \Big(K_k(x_i, x_j) + \frac{R_k^2}{C}\,\delta_{ij}\Big) \]

To compute the optimal step in the gradient descent we use a line search. The complete two-step procedure is given in Algorithm 1.

Algorithm 1 R-MKL
  Initialize µ_k^1 = 1/M for k = 1, ..., M
  repeat
    Set R_µ² = Σ_k µ_k^t R_k²
    Compute J(µ^t) as the solution of a quadratic optimization problem with K := Σ_k µ_k^t K_k
    Compute ∂J/∂µ_k for k = 1, ..., M
    Compute the optimal step γ_t by line search
    µ^{t+1} ← µ^t - γ_t ∇_µ J(µ^t)
  until the stopping criterion is satisfied

4.3 Computational complexity

At each step of the iteration we have to compute the solution of a standard SVM with kernel K = Σ_{k=1}^M µ_k K_k and C equal to C / Σ_k µ_k R_k², which is a quadratic programming problem with a complexity of O(n³), where n is the number of instances. Moreover, when µ is updated we have to recompute the approximation of R_µ²; the complexity of this procedure is linear in the number of kernels, O(M).
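Below is a compact sketch of our reading of Algorithm 1, reusing squared_radius and inner_svm from the earlier sketches. Two simplifications are ours and not the paper's: a fixed step size instead of the line search, and a Euclidean projection onto the simplex to keep µ feasible after each gradient step.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {mu : mu_k >= 0, sum_k mu_k = 1} (our substitute for the
    constrained line search of Algorithm 1)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def r_mkl(y, kernels, R2, C, n_iter=50, step=1.0):
    """Two-step R-MKL loop: fix mu and solve the inner SVM, then take a gradient step on
    J(mu) and project mu back onto the simplex (fixed step size instead of line search)."""
    M = len(kernels)
    mu = np.full(M, 1.0 / M)
    for _ in range(n_iter):
        J, alpha = inner_svm(mu, y, kernels, R2, C)        # step 1: SVM with mu fixed
        v = alpha * y
        grad = np.array([-0.5 * (v @ K @ v + (R2[k] / C) * np.dot(alpha, alpha))
                         for k, K in enumerate(kernels)])  # dJ/dmu_k
        mu = project_simplex(mu - step * grad)             # step 2: descend on J(mu)
    return mu

# Example usage, assuming normalized Gram matrices `kernels`, labels `y` in {-1, +1},
# and R2 = np.array([squared_radius(K) for K in kernels]) from the earlier sketch:
# mu = r_mkl(y, kernels, R2, C=10.0)
```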

5 Experiments

We experimented with ten different datasets. Six of them were taken from the UCI repository (Ionosphere, Liver, Sonar, Wdbc, Wpbc, Musk1), while four come from the domain of genomics and proteomics (ColonCancer, CentralNervousSystem, FemaleVsMale, Leukemia) [16]; the latter four are characterized by small sample sizes and high dimensionality. A short description of the datasets is given in Table 1.

Table 1. Short description of the classification datasets used
  Dataset          #Inst  #Attr  #Class1  #Class2
  Ionosphere
  Liver
  Sonar
  Wdbc
  Wpbc
  Musk1
  ColonCancer
  CentralNervous
  FemaleVSMale
  Leukemia

We experimented with two different types of basic kernels, i.e. polynomials and Gaussians, and performed two sets of experiments. In the first set of experiments we used both types of kernels, and in the second one we focused only on Gaussian kernels. For each set of experiments the total number of basic kernels was 20: for the first set we used polynomial kernels of degree one, two, and three, and 17 Gaussians with bandwidth δ ranging from 1 to 17 with a step of one (see the sketch below); for the second set of experiments we only used Gaussian kernels with bandwidth δ ranging from 1 to 20 with a step of one.

We compared our MKL algorithm (denoted R-MKL) with two state-of-the-art MKL algorithms: the Support Kernel Machine (SKM) [7] and SimpleMKL [6]. We estimate the classification error using 10-fold cross-validation. For comparison purposes we also provide the performances of the best single kernel (BK) and the majority classifier (MC); the latter always predicts the majority class. The performance of BK is that of the best single kernel, also estimated by 10-fold cross-validation; since it is the best result after seeing the performance of all individual kernels on the available data, it is optimistically biased. We tuned the parameter C in an inner-loop 10-fold cross-validation, choosing its value from the set {0.1, 1, 10, 100}. All algorithms terminate when the duality gap falls below a fixed threshold. All input kernel matrices are normalized by Equation 3.

We compared the significance of the performance differences of the algorithms with McNemar's test [17]. We also established a ranking schema for the examined MKL algorithms based on the results of the pairwise comparisons [18]. More precisely, if an algorithm is significantly better than another it is credited with one point; if there is no significant difference between two algorithms they are each credited with 0.5 points; finally, if an algorithm is significantly worse than another it is credited with zero points. Thus, if an algorithm is significantly better than all the others for a given dataset it has a score of two.

We give the full results in Tables 2 and 3. For each algorithm we report a triplet in which the first element is the estimated classification error, the second is the number of selected kernels, and the last is the rank described above.
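As an illustration of the first kernel set described above (three polynomial degrees plus 17 Gaussian bandwidths, all normalized as in Equation 3), here is a small sketch; the exact parametrization of the polynomial and Gaussian kernels is our assumption, since the paper only lists the degrees and bandwidths.

```python
import numpy as np

def experiment_kernel_bank(X):
    """Build 20 basic kernels in the spirit of the first experiment set: polynomial
    kernels of degree 1-3 and Gaussian kernels with bandwidth delta = 1..17, all
    normalized as in Eq. (3). The exact kernel forms below are our assumptions."""
    def normalize(K):
        d = np.sqrt(np.diag(K))
        return K / np.outer(d, d)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    bank = [(X @ X.T + 1.0) ** deg for deg in (1, 2, 3)]
    bank += [np.exp(-sq / (2.0 * delta ** 2)) for delta in range(1, 18)]
    return [normalize(K) for K in bank]
```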

Our kernel combination algorithm does remarkably well in the first set of experiments (Table 2), in which it is significantly better than both other algorithms on four datasets and significantly worse on two; for the four remaining datasets there are no significant differences. Note that in the cases of Wpbc and CentralNervousSystem all algorithms have a performance similar to that of the majority classifier, i.e. the learned models do not have any discriminatory power. By examining the classification performances of the individual kernels on these datasets we see that none of them performed better than the majority classifier; this could explain the bad behavior of the different kernel combination schemes. Overall, for this set of experiments R-MKL gets 12 significance points over the different datasets, SKM 10.5, and SimpleMKL 7.5. The performance improvements of R-MKL over the two other methods are quite impressive on those datasets on which R-MKL performs well; more precisely, its classification error is around 30%, 50%, and 40% of that of the other algorithms for the Wdbc, Musk1, and Leukemia datasets, respectively.

In the second set of experiments (Table 3), all methods perform very poorly on seven out of ten datasets; their classification performance is similar to that of the majority classifier. On the remaining datasets, with the exception of ColonCancer for which SKM failed (we used the implementation provided by the authors of the algorithm and it returns with errors), there is no significant difference between the three algorithms. The collectively bad performance on the last seven datasets is explained by the fact that none of the basic kernels had a classification error better than that of the majority classifier. Overall, for this set of experiments SKM scores 9 points, and SimpleMKL and R-MKL score 10.5 points each.

Comparing the number of kernels selected by the different kernel combination methods using a paired t-test (significance level of 0.05) revealed no statistically significant differences between the three algorithms on either set of experiments.

In an effort to get an empirical estimate of the quality of the approximation of the radius that we used to make the optimization problems convex, we computed the approximation error defined as (Σ_k µ_k R_k² - R_µ²) / R_µ². We computed this error over the different folds of the ten-fold cross-validation for each dataset, and also over 1000 random values of µ for each dataset. The empirical evidence seems to indicate that the R_µ² ≤ Σ_k µ_k R_k² bound is relatively tight, at least for the datasets we examined.

6 Conclusion and future work

In this paper we presented a new kernel combination method that incorporates in its cost function not only the margin but also the radius of the smallest sphere that encloses the data. This idea is a direct implementation of well-known error bounds from statistical learning theory. To the best of our knowledge this is the first work in which the radius is used together with the margin in an effort to minimize the generalization error. Even though the resulting optimization problems were non-convex and we had to use an upper bound on the radius to obtain convex forms, the empirical results were quite encouraging. In particular, our method was competitive with other state-of-the-art methods for kernel combination, thus demonstrating the benefit and the potential of the proposed technique. Finally, we mention that it is still a challenging research direction to fully exploit the examined generalization bound.

In future work we would like to examine optimization techniques for directly solving the non-convex optimization problem presented in Formula 11.

Table 2. Results for the first set of experiments, where both polynomial and Gaussian kernels are used. Each triplet x, y, z gives respectively the classification error, the number of selected kernels, and the number of significance points that the algorithm scores for the given experiment set and dataset. Columns BK and MC give the errors of the best single kernel and the majority classifier, respectively.
  D. Set    SKM          Simple       R-MKL        BK   MC
  Ionos.         , 02,        , 02,        , 02,
  Liver     33.82, 05,        , 13,        , 03,
  Sonar     15.50, 01,        , 01,        , 01,
  Wdbc      11.25, 03,        , 18,        , 18,
  Wpbc      23.68, 17,        , 01,        , 01,
  Musk1          , 07,        , 18,        , 01,
  Colon.         , 18,        , 18,        , 18,
  CentNe.        , 17,        , 17,        , 17,
  Female.        , 20,        , 18,        , 18,
  Leuk.          , 18,        , 18,        , 18,

Table 3. Results for the second set of experiments, where only Gaussian kernels are used. The table contains the same information as the previous one.
  D. Set    SKM          Simple       R-MKL        BK   MC
  Ionos.         , 02,        , 04,        , 03,
  Liver     33.53, 03,        , 20,        , 16,
  Sonar     15.50, 01,        , 01,        , 01,
  Wdbc      37.32, 03,        , 19,        , 20,
  Wpbc      23.68, 20,        , 19,        , 20,
  Musk1          , 14,        , 19,        , 20,
  Colon.    NA           35.00, 20,        , 20,
  CentNe.        , 20,        , 20,        , 20,
  Female.        , 20,        , 20,        , 20,
  Leuk.          , 20,        , 20,        , 20,

In particular, we will examine whether it is possible to decompose the cost function as a sum of convex and concave functions, or to represent it as d.m. functions (a difference of two monotonic functions) [19, 20]. Additionally, we plan to analyze the bound R_µ² ≤ Σ_k µ_k R_k² and see how it relates to the real optimal value.

Acknowledgments. The work reported in this paper was partially funded by the European Commission through the EU projects DropTop, DebugIT and e-LICO. The support of the Swiss NSF is also gratefully acknowledged.

References

1. Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press (2004)

2. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA (2001)
3. Lanckriet, G., Cristianini, N., Bartlett, P., El Ghaoui, L.: Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research 5 (2004)
4. Ong, C.S., Smola, A.J., Williamson, R.C.: Learning the kernel with hyperkernels. Journal of Machine Learning Research 6 (2005)
5. Sonnenburg, S., Rätsch, G., Schäfer, C.: A general and efficient multiple kernel learning algorithm. Journal of Machine Learning Research 7 (2006)
6. Bach, F., Rakotomamonjy, A., Canu, S., Grandvalet, Y.: SimpleMKL. Journal of Machine Learning Research (2008)
7. Bach, F.R., Lanckriet, G.R.G., Jordan, M.I.: Multiple kernel learning, conic duality, and the SMO algorithm. In: ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning, New York, NY, USA, ACM (2004)
8. Lanckriet, G., De Bie, T., Cristianini, N., Jordan, M.I., Noble, W.S.: A statistical framework for genomic data fusion. Bioinformatics 20 (2004)
9. Cristianini, N., Shawe-Taylor, J., Elisseeff, A.: On kernel-target alignment. Journal of Machine Learning Research (2002)
10. Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.: Choosing multiple parameters for support vector machines. Machine Learning 46(1-3) (2002)
11. Crammer, K., Keshet, J., Singer, Y.: Kernel design using boosting. In: Advances in Neural Information Processing Systems 14. MIT Press, Cambridge, MA (2002)
12. Bousquet, O., Herrmann, D.: On the complexity of learning the kernel matrix. In: Advances in Neural Information Processing Systems 14. MIT Press, Cambridge, MA (2003)
13. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines. Cambridge University Press (2000)
14. Vapnik, V.: Statistical Learning Theory. Wiley Interscience (1998)
15. Bonnans, J., Shapiro, A.: Optimization problems with perturbation: A guided tour. SIAM Review 40(2) (1998)
16. Kalousis, A., Prados, J., Hilario, M.: Stability of feature selection algorithms: a study on high dimensional spaces. Knowledge and Information Systems 12(1) (2007)
17. McNemar, Q.: Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 12 (1947)
18. Kalousis, A., Theoharis, T.: NOEMON: Design, implementation and performance results of an intelligent assistant for classifier selection. Intelligent Data Analysis Journal 3 (1999)
19. Liberti, L., Maculan, N. (eds.): Global Optimization: From Theory to Implementation. Springer (2006)
20. Collobert, R., Weston, J., Bottou, L.: Trading convexity for scalability. In: Proceedings of the 23rd International Conference on Machine Learning (2006)
21. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004)

Appendix

Proof of Inequality 7. If K(x, x') is the kernel function associated with the Φ(x) mapping, then the computation of the radius in the dual form is given in [1]:

\[ R^2 = \max_{\beta}\ \sum_{i}\beta_i K(x_i, x_i) - \sum_{i,j}\beta_i\beta_j K(x_i, x_j) \quad \text{s.t.} \quad \sum_{i}\beta_i = 1,\ \beta_i \ge 0 \tag{18} \]

Let β* be the optimal solution of (18) when K = K_µ = Σ_{k=1}^M µ_k K_k, and let β̂^k be the optimal solution of (18) when K = K_k, i.e.:

\[ R_\mu^2 = \sum_{k=1}^{M}\mu_k\Big(\sum_{i}\beta_i^* K_k(x_i, x_i) - \sum_{i,j}\beta_i^*\beta_j^* K_k(x_i, x_j)\Big), \qquad R_k^2 = \sum_{i}\hat{\beta}_i^k K_k(x_i, x_i) - \sum_{i,j}\hat{\beta}_i^k\hat{\beta}_j^k K_k(x_i, x_j) \]

Then, for each k,

\[ \sum_{i}\beta_i^* K_k(x_i, x_i) - \sum_{i,j}\beta_i^*\beta_j^* K_k(x_i, x_j) \ \le\ \sum_{i}\hat{\beta}_i^k K_k(x_i, x_i) - \sum_{i,j}\hat{\beta}_i^k\hat{\beta}_j^k K_k(x_i, x_j) \]

since β̂^k maximizes the right-hand side over the simplex. Therefore R_µ² ≤ Σ_{k=1}^M µ_k R_k².

Proof of convexity of R-MKL (Eq. 13). To prove that (13) is convex, it is enough to show that the functions x²/µ, where x ∈ R, µ ∈ R_+, and ξ²/Σ_k µ_k α_k, where ξ, µ_k, α_k ∈ R_+, are convex. The first is a quadratic-over-linear function, which is convex. The second is convex because its epigraph is a convex set [21].
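As a small empirical companion to the proof above, the following sketch (ours, reusing squared_radius from the earlier radius sketch) checks Inequality (7) numerically for a random weight vector on the simplex; the tolerance only absorbs the error of the approximate radius solver.

```python
import numpy as np

def check_radius_bounds(kernels, seed=0):
    """Empirically verify max_k mu_k R_k^2 <= R_mu^2 <= sum_k mu_k R_k^2 (Inequality 7)."""
    rng = np.random.default_rng(seed)
    mu = rng.random(len(kernels))
    mu /= mu.sum()                                    # random point on the simplex
    R2 = np.array([squared_radius(K) for K in kernels])
    R2_mu = squared_radius(sum(m * K for m, K in zip(mu, kernels)))
    tol = 1e-3                                        # slack for the approximate solver
    assert np.max(mu * R2) <= R2_mu + tol and R2_mu <= np.dot(mu, R2) + tol
    return float(R2_mu), float(np.dot(mu, R2))
```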
