MACHINE LEARNING. Department of Computer Science, Artificial Intelligence Research Laboratory, Iowa State University


1 MACHINE LEARNING. Vasant Honavar, Artificial Intelligence Research Laboratory, Department of Computer Science, Bioinformatics and Computational Biology Program, Center for Computational Intelligence, Learning, & Discovery, Iowa State University. Copyright Vasant Honavar, 2006.

2 The Generalization Problem. Capacity of the machine: the ability to learn any training set without error; related to the VC dimension. Excellent memory is not an asset when it comes to learning from limited data. "A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist's lazy brother, who declares that if it is green, it is a tree." (C. Burges)

3 A Little Learning Theory. Suppose we are given l observations (x_i, y_i), with train and test points drawn randomly (i.i.d.) from some unknown probability distribution D(x, y). The machine learns the mapping x_i -> y_i via a hypothesis h(x, α); a particular choice of α generates a trained machine. The expectation of the test error, or expected risk, is

R(\alpha) = \int \tfrac{1}{2} \, | y - h(x, \alpha) | \, dD(x, y)   (1)

4 A Bound on the Generalization Performance. The empirical risk is

R_{emp}(\alpha) = \frac{1}{2l} \sum_{i=1}^{l} | y_i - h(x_i, \alpha) |   (2)

Choose some δ such that 0 < δ < 1. With probability 1 − δ the following risk bound on h(x, α) under distribution D holds (Vapnik, 1995):

R(\alpha) \le R_{emp}(\alpha) + \sqrt{ \frac{ d \, (\log(2l/d) + 1) - \log(\delta/4) }{ l } }   (3)

where d ≥ 0 is called the VC dimension and is a measure of the capacity of the machine.
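To make the bound in (3) concrete, here is a minimal Python sketch (not from the slides; the function names and sample values are illustrative assumptions):

import math

def vc_confidence(d, l, delta):
    # Second term of Vapnik's bound (3): sqrt((d*(log(2l/d)+1) - log(delta/4)) / l)
    return math.sqrt((d * (math.log(2 * l / d) + 1) - math.log(delta / 4)) / l)

def risk_bound(emp_risk, d, l, delta=0.05):
    # Upper bound on the expected risk R(alpha), holding with probability 1 - delta
    return emp_risk + vc_confidence(d, l, delta)

# Example: l = 10000 training examples, 95% confidence (delta = 0.05).
# Note the bound approaches 1 (and stops being informative) near d/l = 0.37.
for d in (100, 2000, 3700):
    print(d / 10000.0, risk_bound(0.0, d, 10000))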

5 A Bound on the Generalization Performance. The second term on the right-hand side is called the VC confidence. Three key points about the actual risk bound: It is independent of D(x, y). It is usually not possible to compute the left-hand side. If we know d, we can compute the right-hand side. The risk bound gives us a way to compare learning machines!

6 [Figure-only slide; no text content survived transcription.]

7 The VC Dimension. Definition: the VC dimension of a set of functions H = {h(x, α)} is d if and only if there exists a set of points {x_i}_{i=1}^{d} such that these points can be labeled in all 2^d possible configurations and, for each labeling, a member of the set H can be found which correctly assigns those labels, but no set {x_i}_{i=1}^{q} with q > d exists satisfying this property.

8 The VC Dimension. The VC dimension of H is the size of the largest subset of X shattered by H. The VC dimension measures the capacity of a set H of hypotheses (functions). If for any number N it is possible to find N points x_1, ..., x_N that can be separated in all 2^N possible ways, we say that the VC dimension of the set is infinite.

9 The VC Dimension: Example. Suppose that the data live in R^2, and the set {h(x, α)} consists of oriented straight lines (linear discriminants). It is possible to find three points that can be shattered by this set of functions; it is not possible to find four. So the VC dimension of the set of linear discriminants in R^2 is three.
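This claim can be checked by brute force (an illustrative sketch, not part of the slides): for each of the 2^N labelings of a point set, test whether some oriented line sgn(w·x + b) realizes it, phrased as a linear feasibility problem and solved with scipy.optimize.linprog.

import itertools
import numpy as np
from scipy.optimize import linprog

def separable(points, labels):
    # Feasibility of y_i * (w . x_i + b) >= 1 as an LP with zero objective.
    A = np.array([[-y * x[0], -y * x[1], -y] for x, y in zip(points, labels)])
    res = linprog(c=np.zeros(3), A_ub=A, b_ub=-np.ones(len(points)),
                  bounds=[(None, None)] * 3)
    return res.success

def shattered(points):
    return all(separable(points, labels)
               for labels in itertools.product([-1, 1], repeat=len(points)))

print(shattered([(0, 0), (1, 0), (0, 1)]))          # True: 3 points in general position
print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: the XOR labeling fails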

10 The VC Dimension of Hyperplanes. Theorem: Consider some set of m points in R^n. Choose any one of the points as origin. Then the m points can be shattered by oriented hyperplanes if and only if the position vectors of the remaining points are linearly independent. Corollary: The VC dimension of the set of oriented hyperplanes in R^n is n+1, since we can always choose n+1 points, and then choose one of the points as origin, such that the position vectors of the remaining n points are linearly independent, but we can never choose n+2 such points.

11 The VC Dimension. The VC dimension can be infinite even when the number of parameters of the set {h(x, α)} of hypothesis functions is low. Example: h(x, α) ≡ sgn(sin(αx)), x, α ∈ R. For any integer l and any labels y_1, ..., y_l with y_i ∈ {−1, 1}, we can find l points x_1, ..., x_l and a parameter α such that those points are shattered by h(x, α). Those points are x_i = 10^{−i}, i = 1, ..., l, and the parameter is

\alpha = \pi \left( 1 + \sum_{i=1}^{l} \frac{ (1 - y_i) \, 10^{i} }{ 2 } \right)
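The construction is easy to verify numerically; the sketch below (illustrative, with an arbitrarily chosen labeling) checks that the prescribed α reproduces any desired labeling of the points x_i = 10^{-i}.

import math

def alpha_for(labels):
    # Burges' construction: alpha = pi * (1 + sum_i (1 - y_i) * 10^i / 2)
    return math.pi * (1 + sum((1 - y) * 10**i / 2
                              for i, y in enumerate(labels, start=1)))

labels = [1, -1, -1, 1, -1]           # any labeling in {-1, +1}^l works
alpha = alpha_for(labels)
points = [10.0**(-i) for i in range(1, len(labels) + 1)]
predicted = [1 if math.sin(alpha * x) > 0 else -1 for x in points]
print(predicted == labels)            # True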

12 Vapnik-Chervonenkis (VC) Dimension. Let H be a hypothesis class over an instance space X; both H and X may be infinite. We need a way to describe the behavior of H on a finite set of points S ⊆ X, S = {X_1, X_2, ..., X_m}. For any concept class H over X and any S ⊆ X, Π_H(S) = { h ∩ S : h ∈ H }. Equivalently, with a little abuse of notation, we can write Π_H(S) = { (h(X_1), ..., h(X_m)) : h ∈ H }. Π_H(S) is the set of all dichotomies (behaviors) on S that are induced or realized by H.

13 Vapnik-Chervonenkis (VC) Dimension. If Π_H(S) = {0, 1}^m, where m = |S|, or equivalently |Π_H(S)| = 2^m, we say that S is shattered by H. A set S of instances is said to be shattered by a hypothesis class H if and only if for every dichotomy of S there exists a hypothesis in H that is consistent with the dichotomy.
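To make Π_H(S) concrete, the sketch below (an illustration not taken from the slides) enumerates the dichotomies that one-dimensional threshold functions h_t(x) = 1 if x >= t induce on a small point set; only m + 1 of the 2^m labelings are realized, so no set of two or more points is shattered, and the VC dimension of this class is 1.

def dichotomies(points):
    # Behaviors induced on `points` by thresholds h_t(x) = 1 if x >= t else 0.
    # Sweeping t over the sorted points (plus one value above the maximum)
    # produces every realizable labeling.
    ts = sorted(points) + [max(points) + 1.0]
    return {tuple(1 if x >= t else 0 for x in points) for t in ts}

S = [0.5, 1.5, 3.0]
print(sorted(dichotomies(S)))   # (0,0,0), (0,0,1), (0,1,1), (1,1,1)
print(len(dichotomies(S)))      # m + 1 = 4 behaviors, versus 2^3 = 8: S is not shattered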

14 VC Dimension of a Hypothesis Class. Definition: The VC dimension V(H) of a hypothesis class H defined over an instance space X is the cardinality d of the largest subset of X that is shattered by H. If arbitrarily large finite subsets of X can be shattered by H, V(H) = ∞. How can we show that V(H) is at least d? Find a set of cardinality at least d that is shattered by H. How can we show that V(H) = d? Show that V(H) is at least d and that no set of cardinality d+1 can be shattered by H.

15 VC Dimension of a Hypothesis Class: Examples. Example: Let the instance space X be the 2-dimensional Euclidean space, and let the hypothesis space H be the set of axis-parallel rectangles in the plane. V(H) = 4: there exists a set of 4 points that can be shattered by axis-parallel rectangles, but no set of 5 points can be shattered by H.

16 Some Useful Properties of VC Dimension.
If H_1 ⊆ H_2, then V(H_1) ≤ V(H_2).
If H_2 = { X − h : h ∈ H_1 } (the complements of the concepts in H_1), then V(H_1) = V(H_2).
If H = H_1 ∩ H_2, then V(H) ≤ V(H_1) + V(H_2).
If H is a finite concept class, then V(H) ≤ lg |H|.
If H is formed by a union or intersection of l concepts from H_1, then V(H) = O( V(H_1) · l · lg l ).
Define Π_H(m) = max{ |Π_H(S)| : |S| = m }. If V(H) = d, then Π_H(m) ≤ Φ_d(m), where Φ_d(m) = 2^m if m ≤ d and Φ_d(m) = O(m^d) if m > d.
Proof: left as an exercise.

17 The VC Dimension. Definition: the VC dimension of a set of functions H = {h(x, α)} is d if and only if there exists a set of points {x_i}_{i=1}^{d} such that these points can be labeled in all 2^d possible configurations and, for each labeling, a member of the set H can be found which correctly assigns those labels, but no set {x_i}_{i=1}^{q} with q > d exists satisfying this property.

18 Risk Bound. What are the VC dimension and empirical risk of the nearest-neighbor classifier? Any number of points, labeled arbitrarily, can be shattered by a nearest-neighbor classifier; thus d = ∞ and the empirical risk = 0. So the bound provides no useful information in this case.

19 The Generalization Problem. Capacity of the machine: the ability to learn any training set without error; related to the VC dimension. Excellent memory is not an asset when it comes to learning from limited data. "A machine with too much capacity is like a botanist with a photographic memory who, when presented with a new tree, concludes that it is not a tree because it has a different number of leaves from anything she has seen before; a machine with too little capacity is like the botanist's lazy brother, who declares that if it is green, it is a tree." (C. Burges)

20 Minimizing the Bound by Minimizing d.

R(\alpha) \le R_{emp}(\alpha) + \sqrt{ \frac{ d \, (\log(2l/d) + 1) - \log(\delta/4) }{ l } }   (3)

The VC confidence (the second term in (3)) depends on d/l. Given a 95% confidence level (δ = 0.05) and assuming a training sample of size 10000, one should choose the learning machine whose set of functions has minimal d. For d/l > 0.37 (with δ = 0.05 and l = 10000) the VC confidence exceeds 1; thus for higher d/l the bound is not tight.

21 Bounds on Error of Classification. Vapnik proved that the error ε of a classification function h for separable data sets is

ε = O( d / l )

where d is the VC dimension of the hypothesis class and l is the number of training examples. This means the error depends on the VC dimension of the hypothesis class being searched and on the number of training examples.

22 Structural Risk Minimization. Finding a learning machine with the minimum upper bound on the actual risk leads us to a method of choosing an optimal machine for a given task. This is the essential idea of structural risk minimization (SRM). Let H_1 ⊂ H_2 ⊂ H_3 ⊂ ... be a sequence of nested subsets of hypotheses whose VC dimensions satisfy d_1 < d_2 < d_3 < ... SRM then consists of finding that subset of functions which minimizes the upper bound on the actual risk. This can be done by training a series of machines, one for each subset, where for a given subset the goal of training is to minimize the empirical risk. One then takes the trained machine in the series whose sum of empirical risk and VC confidence is minimal.

23 Bounds on Error of Classification. Margin-based bound:

\varepsilon = O\!\left( \frac{ (L/\gamma)^2 }{ l } \right), \quad L = \max_p \| x_p \|, \quad \gamma = \min_i \frac{ y_i f(x_i) }{ \| w \| }, \quad f(x) = \langle w, x \rangle + b

Important insight: the error of a classifier trained on a separable data set is inversely proportional to its margin, and is independent of the dimensionality of the input space!

24 Maximal Margin Classifier. The bounds on the error of classification suggest the possibility of improving generalization by maximizing the margin. Minimize the risk of overfitting by choosing the maximal margin hyperplane in feature space. SVMs control capacity by increasing the margin, not by reducing the number of features.

25 Improving Generalization Requires Controlling Capacity. The empirical risk is

R_{emp}(\alpha) = \frac{1}{2l} \sum_{i=1}^{l} | y_i - h(x_i, \alpha) |   (2)

Choose some δ such that 0 < δ < 1. With probability 1 − δ the following risk bound on h(x, α) under distribution D holds (Vapnik, 1995):

R(\alpha) \le R_{emp}(\alpha) + \sqrt{ \frac{ d \, (\log(2l/d) + 1) - \log(\delta/4) }{ l } }   (3)

where d ≥ 0 is called the VC dimension and is a measure of the capacity of the machine.

26 Minimizing the Risk Bound by Minimizing d. [Figure: the VC confidence plotted as a function of d/l.]

27 Minimizing the Bound by Minimizing d. The VC confidence (the second term in (3)) depends on d/l. Given a 95% confidence level (δ = 0.05) and assuming a training sample of size 10000, one should choose the learning machine whose set of functions has minimal d. For d/l > 0.37 (with δ = 0.05 and l = 10000) the VC confidence exceeds 1; thus for higher d/l the bound is not tight.

28 Margin. Linear separation of the input space:

f(x) = \langle w, x \rangle + b, \quad h(x) = \mathrm{sgn}(f(x))

[Figure: the separating hyperplane defined by weight vector w, at offset b/‖w‖ from the origin.]

29 Functional and Geometric Margin. The functional margin of a linear discriminant (w, b) w.r.t. a labeled pattern (x_i, y_i) ∈ R^d × {−1, 1} is defined as

γ_i ≡ y_i ( \langle w, x_i \rangle + b )

If the functional margin is negative, the pattern is incorrectly classified; if it is positive, the classifier predicts the correct label. The larger γ_i, the further away x_i is from the discriminant. This is made more precise by the notion of the geometric margin, which measures the Euclidean distance of a point from the decision boundary.
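A small sketch (illustrative, with made-up data) computing both margins for a toy data set; the geometric margin is just the functional margin divided by ‖w‖.

import numpy as np

def margins(w, b, X, y):
    # Functional margin of each point: gamma_i = y_i * (<w, x_i> + b).
    functional = y * (X @ w + b)
    # Geometric margin: signed Euclidean distance from the decision boundary.
    geometric = functional / np.linalg.norm(w)
    return functional, geometric

w, b = np.array([1.0, 1.0]), -1.0
X = np.array([[2.0, 2.0], [0.0, 0.0], [1.5, 0.0]])
y = np.array([1, -1, 1])
f, g = margins(w, b, X, y)
print(f)            # [3.  1.  0.5]
print(g)            # functional margins scaled by 1/||w|| = 1/sqrt(2)
print(g.min())      # margin of the training set: the worst-case point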

30 Geometric Margin. [Figure: a point X_i in S^+ with geometric margin γ_i and a point X_j in S^− with geometric margin γ_j.] The geometric margin of two points.

31 Geometric Margin: Example. [Worked example computing the geometric margin of a labeled point X_i ∈ S^+ for a specific (w, b); the numeric details did not survive transcription.]

32 Margin of a Training Set. The functional margin of a training set: γ = min_i γ_i over the functional margins of its points. The geometric margin of a training set: γ = min_i γ_i over the geometric margins of its points.

33 Maximum Margin Separating Hyperplane. γ = min_i γ_i is called the (functional) margin of (w, b) w.r.t. the data set S = {(x_i, y_i)}. The margin of a training set S is the maximum geometric margin over all hyperplanes; a hyperplane realizing this maximum is a maximal margin hyperplane.

34 Maximal Margin Classifier. The bounds on the error of classification suggest the possibility of improving generalization by maximizing the margin. Minimize the risk of overfitting by choosing the maximal margin hyperplane in feature space. SVMs control capacity by increasing the margin, not by reducing the number of features.

35 Maximizing the Margin by Minimizing ‖w‖. The definition of the hyperplane (w, b) does not change if we rescale it to (σw, σb) for σ > 0. The functional margin depends on scaling, but the geometric margin γ does not. If we fix (by rescaling) the functional margin to 1, the geometric margin is equal to 1/‖w‖. Then we can maximize the margin by minimizing the norm ‖w‖.

36 Maximizing the Margin by Minimizing ‖w‖. Distance between the two convex hulls: for the closest points x^+ and x^−,

\langle w, x^+ \rangle + b = +1, \quad \langle w, x^- \rangle + b = -1, \quad \text{so} \quad \langle w, (x^+ - x^-) \rangle = 2 \quad \text{and} \quad \left\langle \frac{w}{\|w\|}, (x^+ - x^-) \right\rangle = \frac{2}{\|w\|}

[Figure: the two margin hyperplanes and the separation 2/‖w‖ between them.]

37 Learning as Optimization. Minimize

\langle w, w \rangle

subject to:

y_i ( \langle w, x_i \rangle + b ) \ge 1
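As an illustrative sketch (not from the slides), this primal problem can be handed directly to a general-purpose solver; here scipy.optimize.minimize with SLSQP minimizes ½⟨w, w⟩ under the linear constraints, on a tiny made-up separable data set.

import numpy as np
from scipy.optimize import minimize

X = np.array([[2.0, 2.0], [2.0, 0.0], [0.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(z):
    # z = (w1, w2, b); minimize (1/2) <w, w>
    return 0.5 * np.dot(z[:2], z[:2])

# One inequality constraint per point: y_i * (<w, x_i> + b) - 1 >= 0
constraints = [{'type': 'ineq',
                'fun': lambda z, i=i: y[i] * (np.dot(z[:2], X[i]) + z[2]) - 1.0}
               for i in range(len(y))]

res = minimize(objective, x0=np.zeros(3), constraints=constraints, method='SLSQP')
w, b = res.x[:2], res.x[2]
print(w, b)                          # maximal margin hyperplane
print(1.0 / np.linalg.norm(w))       # geometric margin = 1 / ||w||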

38 Digression: Minimizing / Maximizing Functions. Consider f(x), a function of a scalar variable x with domain D. f(x) is convex over some sub-domain D_1 ⊆ D if, for any X_1, X_2 ∈ D_1, the chord joining the points (X_1, f(X_1)) and (X_2, f(X_2)) lies above the graph of f. f(x) has a local minimum at x = X_a if there is a neighborhood U ⊆ D around X_a such that for all x ∈ U, f(x) ≥ f(X_a). We say that lim_{x→a} f(x) = A if, for any ε > 0, there exists δ > 0 such that |f(x) − A| < ε for all x with |x − a| < δ.

39 Minimizing / Maximizing Functions. We say that f(x) is continuous at x = a if lim_{ε→0} f(a + ε) = lim_{ε→0} f(a − ε) = f(a). The derivative of the function f(x) is defined as

\frac{df}{dx} = \lim_{\Delta x \to 0} \frac{ f(x + \Delta x) - f(x) }{ \Delta x }

If X_0 is a local maximum or a local minimum, then df/dx evaluated at x = X_0 equals 0.

40 Minimizing / Maximizing Functions.

\frac{d}{dx}(u + v) = \frac{du}{dx} + \frac{dv}{dx}
\frac{d}{dx}(uv) = u \frac{dv}{dx} + v \frac{du}{dx}
\frac{d}{dx}\left(\frac{u}{v}\right) = \frac{ v \frac{du}{dx} - u \frac{dv}{dx} }{ v^2 }

41 Taylor Series Approximation of Functions. If f(x) is differentiable, i.e., its derivatives df/dx, d^2f/dx^2, ..., d^nf/dx^n exist at x = X_0, and f(x) is continuous in the neighborhood of x = X_0, then

f(x) = f(X_0) + (x - X_0) \frac{df}{dx}\Big|_{x=X_0} + \frac{(x - X_0)^2}{2!} \frac{d^2 f}{dx^2}\Big|_{x=X_0} + \ldots + \frac{(x - X_0)^n}{n!} \frac{d^n f}{dx^n}\Big|_{x=X_0} + \ldots

To first order,

f(x) \approx f(X_0) + (x - X_0) \frac{df}{dx}\Big|_{x=X_0}
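A quick numeric illustration (not from the slides): comparing f(x) = e^x near X_0 = 0 with its first-order Taylor approximation.

import math

def taylor1(f, df, x0, x):
    # First-order Taylor approximation: f(x) ~= f(x0) + (x - x0) * f'(x0)
    return f(x0) + (x - x0) * df(x0)

x0 = 0.0
for x in (0.5, 0.1, 0.01):
    exact = math.exp(x)
    approx = taylor1(math.exp, math.exp, x0, x)   # for f = exp, f' = exp as well
    print(x, exact, approx, exact - approx)        # the error shrinks like x^2 / 2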

42 Chain Rule. Let f(X) = f(x_1, x_2, ..., x_n). The partial derivative ∂f/∂x_i is obtained by treating all x_j with j ≠ i as constants. Chain rule: let z = φ(u_1, ..., u_m) and u_k = f_k(x_1, ..., x_n). Then

\frac{\partial z}{\partial x_i} = \sum_{k=1}^{m} \frac{\partial z}{\partial u_k} \frac{\partial u_k}{\partial x_i}

43 Taylor Series Approximation of Multivariate Functions. Let f(X) = f(x_1, x_2, ..., x_n) be differentiable and continuous at X = X_0 = (x_{10}, x_{20}, ..., x_{n0}). Then, to first order,

f(X) \approx f(X_0) + \sum_{i=1}^{n} (x_i - x_{i0}) \frac{\partial f}{\partial x_i}\Big|_{X = X_0}

44 Minimizing / Maximizing Multivariate Functions. To find X* that minimizes f(X), we change the current guess X_C in the direction of the negative gradient of f evaluated at X_C:

X_C \leftarrow X_C - \eta \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)\Big|_{X = X_C}

(why?) for small (ideally infinitesimally small) η > 0.

45 Minimizing / Maximizing Functions. Gradient descent / ascent is guaranteed to find the minimum / maximum when the function has a single minimum / maximum. [Figure: contours of f(x_1, x_2), the current guess X_C = (x_{1C}, x_{2C}), and the optimum X*.]
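A minimal gradient-descent sketch (illustrative, using an arbitrary convex quadratic) of the update rule from the previous slide; the step size η and iteration count are assumptions.

import numpy as np

def grad_descent(grad, x0, eta=0.1, steps=200):
    # Repeatedly step in the direction of the negative gradient.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

# f(x) = (x1 - 3)^2 + 2*(x2 + 1)^2 has its single minimum at (3, -1).
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])
print(grad_descent(grad, [0.0, 0.0]))   # converges to approximately [3, -1]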

46 Constrained Optimization. Primal optimization problem: given functions f; g_i, i = 1..k; h_j, j = 1..m, defined on a domain Ω ⊆ R^n:

minimize f(w), w ∈ Ω                 (objective function)
subject to g_i(w) ≤ 0, i = 1..k      (inequality constraints)
           h_j(w) = 0, j = 1..m      (equality constraints)

Shorthand: g(w) ≤ 0 denotes g_i(w) ≤ 0 for i = 1..k; h(w) = 0 denotes h_j(w) = 0 for j = 1..m. Feasible region: F = { w ∈ Ω : g(w) ≤ 0, h(w) = 0 }.

47 Optimization Problems. Linear program: the objective function as well as the equality and inequality constraints are linear. Quadratic program: the objective function is quadratic, and the equality and inequality constraints are linear. Inequality constraints g_i(w) ≤ 0 can be active, i.e. g_i(w) = 0, or inactive, i.e. g_i(w) < 0. Inequality constraints are often transformed into equality constraints using slack variables: g_i(w) ≤ 0 becomes g_i(w) + ξ_i = 0 with ξ_i ≥ 0. We will be interested primarily in convex optimization problems.

48 Convex Optimization Problem. If the function f is convex, any local minimum w* of an unconstrained optimization problem with objective function f is also a global minimum, since for any u ≠ w*, f(w*) ≤ f(u). A set Ω ⊆ R^n is called convex if, for any w, u ∈ Ω and any θ ∈ (0, 1), the point (θw + (1 − θ)u) ∈ Ω. A convex optimization problem is one in which the set Ω, the objective function, and all the constraints are convex.

49 Lagrangian Theory. Given an optimization problem with an objective function f(w) and equality constraints h_j(w) = 0, j = 1..m, we define the Lagrangian as

L(w, \beta) = f(w) + \sum_{j=1}^{m} \beta_j h_j(w)

where the β_j are called the Lagrange multipliers. The necessary conditions for w* to be a minimum of f(w) subject to the constraints h_j(w) = 0, j = 1..m, are

\frac{\partial L(w^*, \beta^*)}{\partial w} = 0, \quad \frac{\partial L(w^*, \beta^*)}{\partial \beta} = 0

The conditions are sufficient if L(w, β*) is a convex function of w.

50 Lagrangian Theory: Example. Find the lengths u, v, w of the sides of the box that has the largest volume for a given surface area c:

maximize uvw subject to wu + uv + vw = c/2

L = uvw + \beta \left( wu + uv + vw - \frac{c}{2} \right)

\frac{\partial L}{\partial w} = 0 = uv + \beta(u + v); \quad \frac{\partial L}{\partial u} = 0 = vw + \beta(v + w); \quad \frac{\partial L}{\partial v} = 0 = wu + \beta(u + w)

Subtracting pairs of these equations gives β(w − u) = 0 and β(u − v) = 0, so u = v = w = \sqrt{c/6}.
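The stationarity conditions can be checked symbolically (an illustrative sketch using sympy; the variable names mirror the slide).

import sympy as sp

u, v, w, c = sp.symbols('u v w c', positive=True)
beta = sp.symbols('beta', real=True)
L = u * v * w + beta * (w * u + u * v + v * w - c / 2)   # Lagrangian from the slide

# Stationarity w.r.t. u, v, w together with the surface-area constraint
eqs = [sp.diff(L, var) for var in (u, v, w)] + [w * u + u * v + v * w - c / 2]
print(sp.solve(eqs, [u, v, w, beta], dict=True))   # u = v = w = sqrt(c/6): a cube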

51 Lagrangian Optimization: Example. The entropy of a probability distribution p = (p_1, ..., p_n) over a finite set {1, 2, ..., n} is defined as

H(p) = - \sum_{i=1}^{n} p_i \log_2 p_i

The maximum entropy distribution can be found by minimizing −H(p) subject to the constraint

\sum_{i=1}^{n} p_i - 1 = 0
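Carrying the Lagrangian computation through (a standard derivation sketched here for completeness; it is not spelled out on the slide) shows that the maximizer is the uniform distribution:

L(p, \beta) = -\sum_{i=1}^{n} p_i \log_2 p_i + \beta \Big( \sum_{i=1}^{n} p_i - 1 \Big)

\frac{\partial L}{\partial p_i} = -\log_2 p_i - \frac{1}{\ln 2} + \beta = 0
\quad \Rightarrow \quad p_i = 2^{\beta - 1/\ln 2}

Since the right-hand side is the same for every i, all the p_i are equal, and the constraint \sum_i p_i = 1 forces p_i = 1/n, giving H(p) = \log_2 n.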

52 Generalized Lagrangian Theory. Given an optimization problem with domain Ω ⊆ R^n:

minimize f(w), w ∈ Ω                 (objective function)
subject to g_i(w) ≤ 0, i = 1..k      (inequality constraints)
           h_j(w) = 0, j = 1..m      (equality constraints)

where f is convex and the g_i and h_j are affine, we can define the generalized Lagrangian function as

L(w, \alpha, \beta) = f(w) + \sum_{i=1}^{k} \alpha_i g_i(w) + \sum_{j=1}^{m} \beta_j h_j(w)

An affine function is a linear function plus a translation: F(x) is affine if F(x) = G(x) + b, where G(x) is a linear function of x and b is a constant.

53 Generalized Lagrangian Theory. Given an optimization problem with domain Ω ⊆ R^n:

minimize f(w), w ∈ Ω                 (objective function)
subject to g_i(w) ≤ 0, i = 1..k      (inequality constraints)
           h_j(w) = 0, j = 1..m      (equality constraints)

where f is convex and the g_i and h_j are affine, the necessary and sufficient conditions for w* to be an optimum are the existence of α* and β* such that

\frac{\partial L(w^*, \alpha^*, \beta^*)}{\partial w} = 0, \quad \frac{\partial L(w^*, \alpha^*, \beta^*)}{\partial \beta} = 0

\alpha_i^* g_i(w^*) = 0; \quad g_i(w^*) \le 0; \quad \alpha_i^* \ge 0; \quad i = 1..k

54 Dual Optimization Problem. A primal problem can be transformed into a dual by simply setting to zero the derivatives of the Lagrangian with respect to the primal variables, and substituting the results back into the Lagrangian to remove the dependence on the primal variables, resulting in

\theta(\alpha, \beta) = \inf_{w \in \Omega} L(w, \alpha, \beta)

which contains only dual variables and needs to be maximized under simpler constraints. The infimum of a set S of real numbers, denoted inf(S), is defined to be the largest real number that is smaller than or equal to every number in S; if no such number exists then inf(S) = −∞.

55 Maximum Margin Hyperplane. The problem of finding the maximal margin hyperplane is a constrained optimization problem. Use Lagrangian theory (extended by Karush, Kuhn, and Tucker: KKT). Lagrangian:

L_p(w) = \frac{1}{2} \langle w, w \rangle - \sum_i \alpha_i [ y_i ( \langle w, x_i \rangle + b ) - 1 ]   (4)

with α_i ≥ 0.

56 From Primal to Dual. Minimize L_p(w) with respect to (w, b), requiring that the derivatives of L_p(w) with respect to all the α_i vanish, subject to the constraints α_i ≥ 0. Differentiating L_p(w):

\frac{\partial L_p}{\partial w} = 0   (5)
\frac{\partial L_p}{\partial b} = 0   (6)

which give

w = \sum_i \alpha_i y_i x_i   (7)
\sum_i \alpha_i y_i = 0   (8)

Substituting these equality constraints back into L_p(w), we obtain a dual problem.

57 The Dual Problem. Maximize:

L_D = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle   (9)

subject to:

\alpha_i \ge 0   (10)
\sum_i \alpha_i y_i = 0   (11)

Duality permits the use of kernels! The value of b does not appear in the dual problem, and so b needs to be found from the primal constraints:

b = - \frac{1}{2} \left( \max_{i: y_i = -1} \langle w, x_i \rangle + \min_{i: y_i = +1} \langle w, x_i \rangle \right)

58 Karush-Kuhn-Tucker Conditions. The KKT conditions for the maximal margin problem:

\frac{\partial L_p}{\partial w_\nu} = w_\nu - \sum_i \alpha_i y_i x_{i\nu} = 0   (12)
\frac{\partial L_p}{\partial b} = - \sum_i \alpha_i y_i = 0   (13)
y_i ( \langle w, x_i \rangle + b ) - 1 \ge 0   (14)
\alpha_i \ge 0   (15)
\alpha_i [ y_i ( \langle w, x_i \rangle + b ) - 1 ] = 0   (16)

Solving the SVM problem is equivalent to finding a solution to the KKT conditions.

59 Karush-Kuhn-Tucker Conditions for SVM. The KKT conditions state that optimal solutions α_i, (w, b) must satisfy

\alpha_i [ y_i ( \langle w, x_i \rangle + b ) - 1 ] = 0   (16)

Only the training samples x_i for which the functional margin = 1 can have nonzero α_i; they are called support vectors. The optimal hyperplane can be expressed in the dual representation in terms of this subset of the training samples, the support vectors:

f(x, \alpha, b) = \sum_{i=1}^{l} y_i \alpha_i \langle x_i, x \rangle + b = \sum_{i \in SV} y_i \alpha_i \langle x_i, x \rangle + b

60 Because for j ∈ SV, y_j f(x_j, α, b) = y_j ( \sum_{i \in SV} y_i \alpha_i \langle x_i, x_j \rangle + b ) = 1, we have

\langle w, w \rangle = \sum_{j \in SV} \alpha_j y_j \langle w, x_j \rangle = \sum_{j \in SV} \alpha_j (1 - y_j b) = \sum_{j \in SV} \alpha_j - b \sum_{j \in SV} \alpha_j y_j = \sum_{j \in SV} \alpha_j

So the geometric margin is

\gamma = \frac{1}{\| w \|} = \left( \sum_{j \in SV} \alpha_j \right)^{-1/2}

61 Support Vector Machines Yield Sparse Solutions. [Figure: maximal margin separator with margin γ; support vectors from the positive (yellow) class are shown in green and those from the negative class are shown in purple.]

62 Summary of the Maximal Margin Classifier. Good generalization when the training data is noise-free and separable in the kernel-induced feature space. Excessively large Lagrange multipliers often signal outliers, data points that are most difficult to classify; SVM can be used for identifying outliers. Focuses on boundary points: if they happen to be mislabeled (noisy training data), the result is a terrible classifier. Noisy training data often means a non-separable training set (unless we use highly nonlinear kernel transformations, which in turn increase the likelihood of overfitting despite maximizing the margin).

63 Non-Separable Case. In the case of data that are non-separable in feature space, the objective function grows arbitrarily large. Solution: relax the constraint that all training data be correctly classified, but only when necessary to do so. Cortes and Vapnik (1995) introduced slack variables ξ_i, i = 1, ..., l in the constraints:

\xi_i = \max(0, \gamma - y_i ( \langle w, x_i \rangle + b ))   (17)

64 Non-Separable Case. ξ_i = max(0, γ − y_i(⟨w, x_i⟩ + b)). The generalization error can be shown to be

\varepsilon \le \frac{1}{l} \, O\!\left( \frac{ R^2 + \| \xi \|^2 }{ \gamma^2 } \right)   (18)

[Figure: margin violations measured by slack variables ξ_i and ξ_j.]

65 Non-Separable Case. For an error to occur, the corresponding ξ_i must exceed γ (which was chosen to be unity). So Σ_i ξ_i is an upper bound on the number of training errors. So the objective function is changed from ‖w‖²/2 to ‖w‖²/2 + C Σ_i ξ_i^k, where C is a parameter. For any positive k the result is a convex problem; for k = 2 or k = 1 it is also a quadratic programming problem.

66 The Soft-Margin Classifier. The primal problem for k = 2 is to minimize

\frac{1}{2} \langle w, w \rangle + C \sum_i \xi_i^2   (20)

subject to:

y_i ( \langle w, x_i \rangle + b ) \ge 1 - \xi_i   (21)

The primal problem for k = 1 is to minimize

\frac{1}{2} \langle w, w \rangle + C \sum_i \xi_i   (22)

subject to:

y_i ( \langle w, x_i \rangle + b ) \ge 1 - \xi_i, \quad \xi_i \ge 0   (23)

For k = 1, the ξ_i do not appear in the dual problem.

67 The Soft-Margin Classifier. The Lagrangian for the 1-norm soft margin problem is

L_p(w, b, \xi, \alpha, r) = \frac{1}{2} \langle w, w \rangle + C \sum_{i=1}^{l} \xi_i - \sum_{i=1}^{l} \alpha_i [ y_i ( \langle x_i, w \rangle + b ) - 1 + \xi_i ] - \sum_{i=1}^{l} r_i \xi_i   (24)

with α_i ≥ 0 and r_i ≥ 0. Differentiating with respect to w, ξ, and b:

\frac{\partial L_p}{\partial w} = w - \sum_{i=1}^{l} \alpha_i y_i x_i = 0   (25)
\frac{\partial L_p}{\partial \xi_i} = C - \alpha_i - r_i = 0   (26)
\frac{\partial L_p}{\partial b} = - \sum_{i=1}^{l} \alpha_i y_i = 0   (27)

68 Soft Margin Dual Lagrangian. The dual problem: maximize

L_D = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \langle x_i, x_j \rangle   (28)

subject to:

0 \le \alpha_i \le C   (29)
\sum_i \alpha_i y_i = 0   (30)

The KKT conditions are

\alpha_i [ y_i ( \langle w, x_i \rangle + b ) - 1 + \xi_i ] = 0   (31)
\xi_i ( \alpha_i - C ) = 0   (32)

The slack variables are nonzero only when α_i = C.

69 Implementation Techniques. Maximizing a quadratic function, subject to a linear equality constraint and inequality constraints:

W(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j)

0 \le \alpha_i \le C, \quad \sum_i \alpha_i y_i = 0

\frac{\partial W(\alpha)}{\partial \alpha_i} = 1 - y_i \sum_j \alpha_j y_j K(x_j, x_i)

70 On-line Algorithm for the 1-Norm Soft Margin (bias b = 0). Given training set D:

α ← 0
η_i = 1 / K(x_i, x_i)
repeat
    for i = 1 to l
        α_i ← α_i + η_i (1 − y_i Σ_j α_j y_j K(x_j, x_i))
        if α_i < 0 then α_i ← 0
        else if α_i > C then α_i ← C
    end for
until stopping criterion satisfied
return α
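A direct Python rendering of the pseudocode above (a sketch; the Gaussian kernel, the toy data, and the fixed epoch count standing in for the stopping criterion are illustrative assumptions).

import numpy as np

def online_soft_margin(X, y, C=10.0, kernel=None, epochs=100):
    # Coordinate-wise updates for the 1-norm soft margin with bias b = 0.
    if kernel is None:
        kernel = lambda a, b: np.exp(-np.sum((a - b) ** 2))  # Gaussian, width 1
    l = len(y)
    K = np.array([[kernel(X[i], X[j]) for j in range(l)] for i in range(l)])
    alpha = np.zeros(l)
    eta = 1.0 / np.diag(K)
    for _ in range(epochs):                         # stopping criterion: fixed epochs
        for i in range(l):
            alpha[i] += eta[i] * (1.0 - y[i] * np.sum(alpha * y * K[:, i]))
            alpha[i] = min(max(alpha[i], 0.0), C)   # clip to [0, C]
    return alpha

X = np.array([[0.0], [1.0], [3.0], [4.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(online_soft_margin(X, y))   # nonzero entries mark the support vectors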

71 Implementation Techniques. Use QP packages (MINOS, LOQO, quadprog from the MATLAB optimization toolbox). They are not online and require that the data be held in memory in the form of a kernel matrix. Stochastic gradient ascent: sequentially update one weight at a time, so it is online; it gives an excellent approximation in most cases:

\hat{\alpha}_i = \alpha_i + \frac{1}{K(x_i, x_i)} \left( 1 - y_i \sum_j \alpha_j y_j K(x_j, x_i) \right)

72 Chunking and Decomposition. Given training set D:

α ← 0
select an arbitrary working set D̂ ⊆ D
repeat
    solve the optimization problem on D̂
    select a new working set from the data not satisfying the KKT conditions
until stopping criterion satisfied
return α

73 Sequential Minimal Optimization (SMO). At each step SMO optimizes two weights α_i, α_j simultaneously, in order not to violate the linear constraint Σ_i α_i y_i = 0. Optimization of the two weights is performed analytically. SMO realizes gradient descent without leaving the linear constraint (J. Platt). Online versions exist (Li, Long; Gentile).

74 SVM Implementations. SVMlight: one of the first practical implementations of SVM (Joachims). Matlab toolboxes for SVM. LIBSVM: one of the best SVM implementations, by Chih-Chung Chang and Chih-Jen Lin of National Taiwan University. WLSVM: LIBSVM integrated with the WEKA machine learning toolbox, by Yasser El-Manzalawy of the ISU AI Lab.

75 Example. Suppose we have 5 one-dimensional data points: x_1 = 1, x_2 = 2, x_3 = 4, x_4 = 5, x_5 = 6, with 1, 2, 6 as class 1 and 4, 5 as class 2, i.e. y_1 = 1, y_2 = 1, y_3 = −1, y_4 = −1, y_5 = 1. We use the polynomial kernel of degree 2, K(x, y) = (xy + 1)^2, and C is set to 100. We first find the α_i (i = 1, ..., 5) by solving the dual problem.

76 Example. By using a QP solver, we get α_1 = 0, α_2 = 2.5, α_3 = 0, α_4 = 7.333, α_5 = 4.833. Note that the constraints are indeed satisfied. The support vectors are {x_2 = 2, x_4 = 5, x_5 = 6}. The discriminant function is f(x) = Σ_{i ∈ SV} y_i α_i K(x_i, x) + b; b is recovered by solving f(2) = 1, or by f(5) = −1, or by f(6) = 1, as x_2, x_4, x_5 lie on the margin; all give b = 9.
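The α_i above can be reproduced with a generic solver (an illustrative sketch using scipy; SLSQP handles the box and equality constraints of the dual).

import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
C = 100.0
K = (np.outer(x, x) + 1.0) ** 2          # polynomial kernel of degree 2

def neg_dual(a):
    # Negative of L_D = sum(a) - 1/2 sum_ij a_i a_j y_i y_j K_ij (we minimize)
    return -np.sum(a) + 0.5 * a @ (np.outer(y, y) * K) @ a

res = minimize(neg_dual, x0=np.zeros(5), method='SLSQP',
               bounds=[(0.0, C)] * 5,
               constraints=[{'type': 'eq', 'fun': lambda a: a @ y}])
alpha = res.x
print(np.round(alpha, 3))                # approx. [0, 2.5, 0, 7.333, 4.833]

# Recover b from a support vector on the margin, e.g. x_2 = 2 with y_2 = 1:
f_no_b = lambda t: np.sum(alpha * y * (x * t + 1.0) ** 2)
print(1.0 - f_no_b(2.0))                 # approx. b = 9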

77 Example. [Figure: the value of the discriminant function over the input range; x = 1, 2 fall in class 1, x = 4, 5 in class 2, and x = 6 in class 1.]

78 Why does SVM Work? The feature space is often very high dimensional. Why don't we have the curse of dimensionality? A classifier in a high-dimensional space has many parameters and is hard to estimate. Vapnik argues that the fundamental problem is not the number of parameters to be estimated; rather, the problem is the capacity of a classifier. Typically, a classifier with many parameters is very flexible, but there are also exceptions. Let x_i = 10^{−i}, where i ranges from 1 to n. The classifier sgn(sin(αx)) can classify all x_i correctly for all possible combinations of class labels on the x_i. This one-parameter classifier is very flexible.

79 Why does SVM work? Vapnik argues that the capacity of a classifier should not be characterized by the number of parameters, but by the flexibility of the classifier; this is formalized by the VC dimension of a classifier. The minimization of ‖w‖² subject to the condition that the margin = 1 has the effect of restricting the VC dimension of the classifier in the feature space. The SVM performs structural risk minimization: the empirical risk (training error), plus a term related to the generalization ability of the classifier, is minimized. The SVM loss function is analogous to ridge regression: the term ½‖w‖² shrinks the parameters towards zero to avoid overfitting.

80 Choosing the Kernel Function. Probably the trickiest part of using SVM. The kernel function should maximize the similarity among instances within a class while accentuating the differences between classes. A variety of kernels have been proposed (diffusion kernel, Fisher kernel, string kernel, ...) for different types of data. In practice, a low-degree polynomial kernel or an RBF kernel with a reasonable width is a good initial try for data that live in a fixed-dimensional input space. Low-order Markov kernels and their relatives are good ones to consider for structured data: strings, images, etc.
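For concreteness, here are minimal implementations of the two kernels suggested as first tries (a sketch; the degree, offset, and width values are illustrative):

import numpy as np

def polynomial_kernel(x, z, degree=2, c=1.0):
    # K(x, z) = (<x, z> + c)^degree
    return (np.dot(x, z) + c) ** degree

def rbf_kernel(x, z, width=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 * width^2))
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / (2.0 * width ** 2))

print(polynomial_kernel([1.0, 2.0], [3.0, 4.0]))   # (11 + 1)^2 = 144
print(rbf_kernel([1.0, 2.0], [3.0, 4.0]))          # exp(-8/2) ~= 0.0183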

81 Other Aspects of SVM. How to use SVM for multi-class classification? One can change the QP formulation to become multi-class; more often, multiple binary classifiers are combined. One can train multiple one-versus-all classifiers, or combine multiple pairwise classifiers intelligently. How to interpret the SVM discriminant function value as a probability? By performing logistic regression on the SVM output of a set of data that is not used for training. Some SVM software (like libsvm) has these features built in.

82 Recap: How to Use SVM. Prepare the data set. Select the kernel function to use. Select the parameters of the kernel function and the value of C; you can use the values suggested by the SVM software, or you can set apart a validation set to determine the values of the parameters. Execute the training algorithm and obtain the α_i. Unseen data can be classified using the α_i and the support vectors.
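These steps map directly onto a modern library call; the sketch below (assuming scikit-learn is available, with synthetic data) mirrors the recipe with an RBF kernel and a held-out validation split.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 1. Prepare the data set (synthetic two-class data for illustration).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# 2-3. Select the kernel, its parameters, and C (tuned against the validation set).
clf = SVC(kernel='rbf', gamma=0.5, C=10.0)

# 4. Train: the alphas live in clf.dual_coef_, the support vectors in clf.support_vectors_.
clf.fit(X_train, y_train)

# 5. Classify unseen data.
print(clf.score(X_val, y_val))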

83 Strengths and Weaknesses of SVM. Strengths: Training is relatively easy; there are no local optima. It scales relatively well to high-dimensional data. The tradeoff between classifier complexity and error can be controlled explicitly. Non-traditional data like strings and trees can be used as input to SVM, instead of feature vectors. Weaknesses: Need to choose a good kernel function.

84 Recent Developments. Better understanding of the relation between SVM and regularized discriminative classifiers (logistic regression). Knowledge-based SVM: incorporating prior knowledge as constraints in SVM optimization (Shavlik et al.). A veritable zoo of kernel functions. Extensions of SVM to multi-class and structured-label classification tasks. One-class SVM (anomaly detection).

85 Representative Applications of SVM. Handwritten letter classification. Classification of tissues (healthy versus cancerous) based on gene expression data. Text classification. Image classification, e.g., face recognition. Protein function classification. Protein structure classification. Protein sub-cellular localization.


More information

Nonlinear Classifiers II

Nonlinear Classifiers II Nonlnear Classfers II Nonlnear Classfers: Introducton Classfers Supervsed Classfers Lnear Classfers Perceptron Least Squares Methods Lnear Support Vector Machne Nonlnear Classfers Part I: Mult Layer Neural

More information

Fisher Linear Discriminant Analysis

Fisher Linear Discriminant Analysis Fsher Lnear Dscrmnant Analyss Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan Fsher lnear

More information

Vapnik-Chervonenkis theory

Vapnik-Chervonenkis theory Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

15 Lagrange Multipliers

15 Lagrange Multipliers 15 The Method of s a powerful technque for constraned optmzaton. Whle t has applcatons far beyond machne learnng t was orgnally developed to solve physcs equatons), t s used for several ey dervatons n

More information

CSC 411 / CSC D11 / CSC C11

CSC 411 / CSC D11 / CSC C11 18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t

More information

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space. Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

CS 229, Public Course Problem Set #3 Solutions: Learning Theory and Unsupervised Learning

CS 229, Public Course Problem Set #3 Solutions: Learning Theory and Unsupervised Learning CS9 Problem Set #3 Solutons CS 9, Publc Course Problem Set #3 Solutons: Learnng Theory and Unsupervsed Learnng. Unform convergence and Model Selecton In ths problem, we wll prove a bound on the error of

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

3.1 ML and Empirical Distribution

3.1 ML and Empirical Distribution 67577 Intro. to Machne Learnng Fall semester, 2008/9 Lecture 3: Maxmum Lkelhood/ Maxmum Entropy Dualty Lecturer: Amnon Shashua Scrbe: Amnon Shashua 1 In the prevous lecture we defned the prncple of Maxmum

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

We present the algorithm first, then derive it later. Assume access to a dataset {(x i, y i )} n i=1, where x i R d and y i { 1, 1}.

We present the algorithm first, then derive it later. Assume access to a dataset {(x i, y i )} n i=1, where x i R d and y i { 1, 1}. CS 189 Introducton to Machne Learnng Sprng 2018 Note 26 1 Boostng We have seen that n the case of random forests, combnng many mperfect models can produce a snglodel that works very well. Ths s the dea

More information

The exam is closed book, closed notes except your one-page cheat sheet.

The exam is closed book, closed notes except your one-page cheat sheet. CS 89 Fall 206 Introducton to Machne Learnng Fnal Do not open the exam before you are nstructed to do so The exam s closed book, closed notes except your one-page cheat sheet Usage of electronc devces

More information

UVA CS / Introduc8on to Machine Learning and Data Mining

UVA CS / Introduc8on to Machine Learning and Data Mining UVA CS 4501-001 / 6501 007 Introduc8on to Machne Learnng and Data Mnng Lecture 11: Classfca8on wth Support Vector Machne (Revew + Prac8cal Gude) Yanjun Q / Jane Unversty of Vrgna Department of Computer

More information

6.854J / J Advanced Algorithms Fall 2008

6.854J / J Advanced Algorithms Fall 2008 MIT OpenCourseWare http://ocw.mt.edu 6.854J / 18.415J Advanced Algorthms Fall 2008 For nformaton about ctng these materals or our Terms of Use, vst: http://ocw.mt.edu/terms. 18.415/6.854 Advanced Algorthms

More information

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M

CIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute

More information

CSCI B609: Foundations of Data Science

CSCI B609: Foundations of Data Science CSCI B609: Foundatons of Data Scence Lecture 13/14: Gradent Descent, Boostng and Learnng from Experts Sldes at http://grgory.us/data-scence-class.html Grgory Yaroslavtsev http://grgory.us Constraned Convex

More information

Errors for Linear Systems

Errors for Linear Systems Errors for Lnear Systems When we solve a lnear system Ax b we often do not know A and b exactly, but have only approxmatons  and ˆb avalable. Then the best thng we can do s to solve ˆx ˆb exactly whch

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Multilayer Perceptrons and Backpropagation. Perceptrons. Recap: Perceptrons. Informatics 1 CG: Lecture 6. Mirella Lapata

Multilayer Perceptrons and Backpropagation. Perceptrons. Recap: Perceptrons. Informatics 1 CG: Lecture 6. Mirella Lapata Multlayer Perceptrons and Informatcs CG: Lecture 6 Mrella Lapata School of Informatcs Unversty of Ednburgh mlap@nf.ed.ac.uk Readng: Kevn Gurney s Introducton to Neural Networks, Chapters 5 6.5 January,

More information

CSE 546 Midterm Exam, Fall 2014(with Solution)

CSE 546 Midterm Exam, Fall 2014(with Solution) CSE 546 Mdterm Exam, Fall 014(wth Soluton) 1. Personal nfo: Name: UW NetID: Student ID:. There should be 14 numbered pages n ths exam (ncludng ths cover sheet). 3. You can use any materal you brought:

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Statistical pattern recognition

Statistical pattern recognition Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve

More information

Machine Learning & Data Mining CS/CNS/EE 155. Lecture 4: Regularization, Sparsity & Lasso

Machine Learning & Data Mining CS/CNS/EE 155. Lecture 4: Regularization, Sparsity & Lasso Machne Learnng Data Mnng CS/CS/EE 155 Lecture 4: Regularzaton, Sparsty Lasso 1 Recap: Complete Ppelne S = {(x, y )} Tranng Data f (x, b) = T x b Model Class(es) L(a, b) = (a b) 2 Loss Functon,b L( y, f

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

On the Multicriteria Integer Network Flow Problem

On the Multicriteria Integer Network Flow Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of

More information

Composite Hypotheses testing

Composite Hypotheses testing Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter

More information

Finite Mixture Models and Expectation Maximization. Most slides are from: Dr. Mario Figueiredo, Dr. Anil Jain and Dr. Rong Jin

Finite Mixture Models and Expectation Maximization. Most slides are from: Dr. Mario Figueiredo, Dr. Anil Jain and Dr. Rong Jin Fnte Mxture Models and Expectaton Maxmzaton Most sldes are from: Dr. Maro Fgueredo, Dr. Anl Jan and Dr. Rong Jn Recall: The Supervsed Learnng Problem Gven a set of n samples X {(x, y )},,,n Chapter 3 of

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information