CMAC: RECONSIDERING AN OLD NEURAL NETWORK. Gábor Horváth


Budapest University of Technology and Economics, Department of Measurement and Information Systems, Magyar tudósok krt. 2., Budapest, Hungary, H-1117

Abstract: The Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Although CMAC was proposed many years ago, several questions have remained open even today. Among them, the most important ones concern its modelling and generalization capabilities. The limits of its modelling capability were addressed in the literature, and recently a detailed analysis of its generalization property was given. The results show that there are differences between the one-dimensional and the multidimensional versions of CMAC: the modelling capability of a multidimensional network is inferior to that of the one-dimensional one. This paper discusses the reasons for this difference and suggests a new kernel-based interpretation of CMAC. It shows that a one-dimensional binary CMAC can be considered as an SVM with a second-order B-spline kernel function. Applying this approach, the paper shows that one-dimensional and multidimensional CMACs can be constructed with similar modelling capability. Copyright 2003 IFAC

Keywords: Neural Network, Support Vector Machines, CMAC, Ridge regression.

1. INTRODUCTION

The paper deals with CMAC, a special neural architecture (Albus, 1975). CMAC has some attractive features. The most important ones are its extremely fast learning capability (Thompson and Kwon, 1995) and the special architecture that makes efficient digital hardware implementation possible (Miller et al., 1990; Horváth and Deák, 1993; Ker et al., 1995). The CMAC architecture was proposed in the middle of the seventies, and it has been mentioned as a real alternative to the MLP (Miller et al., 1990). The price of these advantageous properties is that its modelling capability is inferior to that of an MLP (Commuri et al., 1995). This is especially true for the multidimensional case, as a multidimensional CMAC can approximate well only functions belonging to the additive function set (Brown et al., 1993). A further deficiency of CMAC is that its generalization capability is also inferior to that of an MLP, even in one-dimensional cases. This drawback was discussed in (Szabó and Horváth, 2002), where the real reason for this property was discovered and a modified training algorithm was proposed for improving the generalization capability. However, the other disadvantage, the limited modelling capability in multidimensional cases, cannot be improved using the proposed modification. This paper deals with this question. It will show that the limited modelling capability comes from the architecture of the multidimensional CMAC. The architecture can be modified in such a way that the modified version has improved modelling capability (Lane et al., 1992). However, this modification would increase the network complexity, and this increased complexity is unacceptable from the viewpoint of realization if the input dimension is (much) larger than two. CMAC can be interpreted as a network where the output is formed as a sum of weighted basis functions. The original, binary CMAC applies rectangular (first-order B-spline) basis functions of fixed positions.

This paper shows that an alternative interpretation can also be given, in which the network is considered as an SVM (support vector machine) with second-order B-spline kernel functions, where the positions of the kernel functions depend on the training data. This interpretation can also be applied to higher-order CMACs (Chiang and Lin, 1996; Ker et al., 1995) with higher-order basis functions (kth-order B-splines). In these cases CMACs correspond to SVMs with properly chosen higher-order ((2k)th-order) B-spline kernels.

2. A SHORT OVERVIEW OF THE CMAC

CMAC is an associative memory type neural network, which performs two subsequent mappings. The first one, a non-linear mapping, projects an input space point u into an association vector a. The second mapping calculates the output of the network as a scalar product of the association vector a and the weight vector w:

$$y(u) = \mathbf{a}(u)^T \mathbf{w} \qquad (1)$$

Fig. 1. The mappings of a CMAC (C = 4): from the discrete input space to the association vector a and the weight vector w.

The association vectors are sparse binary vectors with only C active elements: C bits of an association vector are ones and the others are zeros. In this case the scalar product can be implemented without multiplication; it is nothing more than the sum of the weights selected by the active bits of the association vector:

$$y(u) = \sum_{i:\, a_i(u)=1} w_i \qquad (2)$$

CMAC uses quantized inputs, so the number of possible different input data is finite. There is a one-to-one mapping between the discrete input data and the association vectors, i.e. each possible input point has a unique association vector representation. Another interpretation can also be given to the CMAC. In this interpretation every bit in the association vector corresponds to a binary basis function with a finite support of C quantization intervals. This means that a bit will be active if the input value is within the support of the corresponding basis function. This support is often called the receptive field of the basis function. An element of the association vector can be considered as the value of a basis function for a given input, so the output of a binary basis function is one if the input is in its receptive field and zero elsewhere:

$$a_i(u) = \begin{cases} 1 & \text{if } u \text{ is in the receptive field of the } i\text{-th basis function} \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

The mapping from the input space into the association vector should have the following characteristics: (i) it should map two neighbouring input points into association vectors in which only a few elements, i.e. a few bits, are different; (ii) as the distance between two input points grows, the number of common active bits in the corresponding association vectors decreases. Input points far enough from each other (further than the neighbourhood determined by the parameter C) should not have any common bits. This mapping is responsible for the non-linear property and the generalization of the whole system. The two mappings are implemented in a two-layer network architecture. The first mapping implements a special encoding of the quantized input data; this layer is fixed. The trainable elements, the weight values, which can be updated using the simple LMS rule, are in the second layer. The way of encoding, i.e. the positions of the basis functions in the first layer, determines the generalization property of the network. In one-dimensional cases every quantization interval determines a basis function, so the number of basis functions is approximately equal to the number of possible discrete inputs. However, if we followed this rule in higher-dimensional cases, the number of basis functions would grow exponentially with the dimension, so the network might become too complex.
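As a concrete illustration of the two mappings and of Eq. (2), here is a minimal Python sketch of a one-dimensional binary CMAC trained with the LMS rule; the class and parameter names are ours, not the paper's:

```python
import numpy as np

# A minimal sketch (not from the paper) of a one-dimensional binary CMAC.
# Inputs are quantized to integers u in {0, ..., n_levels-1}; every input
# activates exactly C rectangular basis functions, so the output (Eq. 2)
# is simply the sum of the C selected weights.

class BinaryCMAC1D:
    def __init__(self, n_levels, C, lr=0.5):
        self.C = C                            # width of the receptive fields
        self.lr = lr                          # LMS learning rate
        self.w = np.zeros(n_levels + C - 1)   # one weight per basis function

    def active_indices(self, u):
        # With basis functions placed at every quantization interval,
        # input u falls into the receptive fields of functions u .. u+C-1.
        return np.arange(u, u + self.C)

    def output(self, u):
        # Scalar product a(u)^T w without multiplications (Eqs. 1 and 2).
        return self.w[self.active_indices(u)].sum()

    def train(self, inputs, targets, epochs=20):
        # Simple LMS rule: spread the error evenly over the C active weights.
        for _ in range(epochs):
            for u, d in zip(inputs, targets):
                idx = self.active_indices(u)
                self.w[idx] += self.lr * (d - self.w[idx].sum()) / self.C

# Example: learn a 1-D sinc on 64 quantization levels with C = 8.
net = BinaryCMAC1D(n_levels=64, C=8)
u = np.arange(0, 64, 4)
d = np.sinc((u - 32) / 8.0)
net.train(u, d)
print(net.output(32), d[8])   # trained point vs. target value
```

Because each update touches only C weights, training is very fast, which is the property emphasized above.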
As every selected basis function is multiplied by a weight value, the size of the weight memory equals the total number of basis functions, i.e. the length of the association vector. In higher-dimensional cases the weight memory can be so huge that it cannot be implemented. To avoid this high complexity the number of basis functions must be reduced. In a classical higher-dimensional CMAC this reduction is achieved using basis functions positioned only at the diagonals of the quantized input space, as shown in Figure 2. The shaded regions in Figure 2 are the receptive fields of different basis functions. As shown, the basis functions are grouped into overlays. One overlay contains basis functions with non-overlapping supports, but the union of the supports covers the whole input space. The different overlays have the same structure; they consist of similar basis functions in shifted positions. Every input data point selects C basis functions, each of them on a different overlay, so in an overlay one and only one basis function is active for every input point. A sketch of this addressing follows below.
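The overlay structure can be made concrete with a short sketch; the addressing scheme below is an illustrative guess at one common implementation, not code from the paper:

```python
import numpy as np

# A sketch (illustrative, not the paper's code) of Albus-style overlay
# addressing for an N-dimensional binary CMAC. Overlay j (j = 0..C-1) tiles
# the quantized input space with C-wide hypercubes displaced by j quanta
# along every dimension, i.e. the diagonal placement of Figure 2.

def active_cells(u, C):
    """One active (overlay, cell) pair per overlay for quantized input u."""
    u = np.asarray(u, dtype=int)
    # The +C offset just keeps cell indices nonnegative for small u.
    return [(j, tuple(int(c) for c in (u - j + C) // C)) for j in range(C)]

# A 2-D input activates exactly C cells, one per overlay; the weight memory
# needs one weight per cell (in practice reduced further by hash coding).
print(active_cells((5, 9), C=4))
# -> [(0, (2, 3)), (1, (2, 3)), (2, (1, 2)), (3, (1, 2))]
```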

The positions of the overlays and the basis functions of one overlay can be represented by definite points.

Fig. 2. The basis functions of a two-dimensional CMAC: quantization intervals, overlapping regions, the regions of one overlay, and the points of the main diagonal and of the subdiagonal.

In the original Albus scheme the overlay-representing points are on the main diagonal of the input space, while the basis function positions are represented by the subdiagonal points, as shown in Figure 2 (black dots). In the original Albus architecture the number of overlays does not depend on the dimension of the input vectors; it is always C. This means that in higher-dimensional cases the number of basis functions will not grow exponentially with the dimension. This is an advantageous property from the point of view of implementation; however, this reduced number of basis functions is the real reason for the inferior modelling capability of the multidimensional CMAC, as reducing the number of basis functions also reduces the number of free parameters (Brown and Harris, 1994). To avoid this unwanted effect we can get help from the approach of support vector machines (SVMs) (Vapnik, 1995). It should be mentioned that in higher-dimensional cases further complexity reduction is required. This reduction is achieved by applying an additional compressing layer (Albus, 1975), which uses hash coding. However, the effect of hashing can be neglected when its parameters are selected properly (Ellison, 1991), so we will not deal with this effect.

3. A BRIEF SUMMARY OF SVMs AND LS-SVMs

Support Vector Machines (SVMs) were proposed recently by Vapnik (1995) and are based on Statistical Learning Theory (SLT) and the Structural Risk Minimization (SRM) principle. SVMs can be constructed for linear or non-linear classification tasks as well as for linear or non-linear regression. Here only the regression version is summarized. An SVM is constructed using training data {x_i, y_i}, i = 1, ..., P, similarly to the classical neural networks. The goal of an SVM for regression is to approximate a (non-linear) function (Smola and Schölkopf, 1998), where the quality of the approximation is determined using a loss or cost function. The loss function represents the cost of the deviation from the target output d for each input x. In most cases the ε-insensitive loss function (L_ε) is used, but other (e.g. non-linear) loss functions can be used too. The ε-insensitive loss function is:

$$L_\varepsilon(y) = \begin{cases} 0 & \text{for } |f(\mathbf{x}) - d| < \varepsilon \\ |f(\mathbf{x}) - d| - \varepsilon & \text{otherwise} \end{cases} \qquad (4)$$

Using this loss function, approximation errors smaller than ε are ignored, while larger ones are penalized in a linear way. Our goal is to give a function y = f(x) which represents the dependence of the output y on the input x. In the SVM approach the input vectors are first projected into a higher-dimensional feature space using a set of non-linear functions φ(x): R^N → R^M. The dimensionality M of the new feature space is not defined in advance; it follows from the method (it can even be infinite). The function is estimated by projecting the input data to the higher-dimensional feature space as follows:

$$y = \sum_{j=0}^{M} w_j \varphi_j(\mathbf{x}) = \mathbf{w}^T \boldsymbol{\varphi}(\mathbf{x}), \qquad (5)$$

where w = [w_0, w_1, ..., w_M]^T and φ(x) = [φ_0(x), φ_1(x), ..., φ_M(x)]^T. It is assumed that φ_0(x) ≡ 1, therefore w_0 represents a bias term b. The goal of the approximation is to find the parameter vector w in the representation of Eq. (5). In this optimization problem we have the constraints given in Eq. (6), which come from the ε-insensitive loss function L_ε(f(x), y), and a further constraint ||w|| ≤ c to keep w as short as possible (c is a constant). To deal with training points outside the ε boundary, the slack variables {ξ_i} and {ξ_i*} are introduced:

$$\begin{aligned} d_i - \mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i) &\le \varepsilon + \xi_i, \\ \mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i) - d_i &\le \varepsilon + \xi_i^*, \\ \xi_i, \xi_i^* &\ge 0, \qquad i = 1, 2, \dots, P \end{aligned} \qquad (6)$$

The slack variables describe the penalty for the training points lying outside the ε boundary.
The measure of this cost is determined by the loss function. The problem is solved by minimizing the following objective function:

$$F(\mathbf{w}, \boldsymbol{\xi}, \boldsymbol{\xi}^*) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \rho \sum_{i=1}^{P} (\xi_i + \xi_i^*) \qquad (7)$$

The first term stands for the minimization of ||w||, while the constant ρ is the trade-off parameter between this term and the minimization of the training data errors. This constrained optimization can be formulated as a Lagrangian function, often called the primal problem:

$$J(\mathbf{w},\boldsymbol{\xi},\boldsymbol{\xi}^*,\boldsymbol{\alpha},\boldsymbol{\alpha}^*,\boldsymbol{\gamma},\boldsymbol{\gamma}^*) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \rho\sum_{i=1}^{P}(\xi_i + \xi_i^*) - \sum_{i=1}^{P}\alpha_i\left[\mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i) - d_i + \varepsilon + \xi_i\right] - \sum_{i=1}^{P}\alpha_i^*\left[d_i - \mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i) + \varepsilon + \xi_i^*\right] - \sum_{i=1}^{P}(\gamma_i\xi_i + \gamma_i^*\xi_i^*) \qquad (8)$$

We have to minimize (8) with respect to w and b, and maximize it with respect to the Lagrange multipliers. The primal problem deals with a convex cost function and linear constraints; therefore, from this constrained optimization problem a dual problem can be constructed, which can be solved more easily. The solution can be obtained using quadratic programming (QP). To do this, the Karush-Kuhn-Tucker (KKT) conditions are used (Vapnik, 1995). The dual problem is:

$$Q(\boldsymbol{\alpha}, \boldsymbol{\alpha}^*) = \sum_{i=1}^{P} d_i(\alpha_i - \alpha_i^*) - \varepsilon\sum_{i=1}^{P}(\alpha_i + \alpha_i^*) - \frac{1}{2}\sum_{i=1}^{P}\sum_{j=1}^{P}(\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)K(\mathbf{x}_i, \mathbf{x}_j) \qquad (9)$$

with constraints:

$$\sum_{i=1}^{P}(\alpha_i - \alpha_i^*) = 0, \qquad 0 \le \alpha_i, \alpha_i^* \le \rho, \quad i = 1, \dots, P \qquad (10)$$

From the dual problem it can be seen that the solution does not depend directly on the φ non-linear function set. Instead, a kernel function is used, which is formed as K(x_i, x_j) = φ(x_i)^T φ(x_j). The result of the dual problem is the set of Lagrange multipliers α_i and α_i*. Finally, the response of the SVM can be determined as a weighted sum of the values of the kernel function:

$$y = \sum_{i=1}^{P}(\alpha_i - \alpha_i^*)K(\mathbf{x}, \mathbf{x}_i) \qquad (11)$$

The nonzero multipliers (α_i − α_i*), i = 1, ..., P, mark their corresponding input data points as support vectors. It can be seen that the response depends only on the support vectors and not on all the training data. This property is an interesting and important feature of SVM solutions: the solution is obtained as a sparse approximation. For constructing an SVM, instead of choosing the non-linear functions, the kernel function should be selected. To be a kernel, a function must satisfy the Mercer condition (Vapnik, 1995). The most popular kernel functions are the Gaussian functions. In this case the SVM corresponds to an RBF network where the centre vectors of the Gaussian basis functions are the support vectors. So the structure of a support vector machine and an RBF network may be similar, although their constructions are quite different. The main drawback of SVM is the high computational burden of the required quadratic programming. Recently a least squares (LS) version of SVM was proposed by Suykens (2001). LS-SVM can also be applied for both classification and regression. LS-SVM applies a quadratic cost function and equality constraints instead of the inequality ones given in Eq. (6). The optimization problem of LS-SVM is formulated as follows:

$$F(\mathbf{w}, \mathbf{e}) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \gamma\,\frac{1}{2}\sum_{i=1}^{P} e_i^2, \qquad (12)$$

where

$$d_i = \mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i) + b + e_i. \qquad (13)$$

From these equations one can construct the Lagrangian, which leads to the following overall solution:

$$\begin{bmatrix} 0 & \mathbf{r}^T \\ \mathbf{r} & \boldsymbol{\Omega} + \gamma^{-1}\mathbf{I} \end{bmatrix}\begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{d} \end{bmatrix} \qquad (14)$$

Here d = [d_1, d_2, ..., d_P]^T, α = [α_1, α_2, ..., α_P]^T, r = [1, ..., 1]^T and Ω_{i,j} = K(x_i, x_j). The response of the "network" can be obtained as

$$f(\mathbf{x}) = \sum_{i=1}^{P}\alpha_i K(\mathbf{x}, \mathbf{x}_i) + b \qquad (15)$$

It can be seen that the response for a given input can be obtained similarly to (11); however, here instead of quadratic programming only a matrix inversion is required to determine the Lagrange multipliers. It can also be seen from (12) that the real problem is to find a weight vector that minimizes the cost function. An important difference between Vapnik's SVM and LS-SVM is that the latter solution is not sparse: all training points are used in the solution. To obtain a sparse solution, many different approaches have been proposed, see for example (Suykens et al., 2000) and (Valyon and Horváth).
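Since (14) is just a linear system, an LS-SVM regressor takes only a few lines of code. A minimal sketch, assuming a Gaussian kernel purely for illustration:

```python
import numpy as np

# Sketch of LS-SVM regression: build Omega, solve the linear system (14),
# and predict with Eq. (15). A Gaussian kernel is assumed for illustration.

def kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, d, gamma=100.0, sigma=1.0):
    P = len(d)
    Omega = kernel(X, X, sigma)
    A = np.zeros((P + 1, P + 1))
    A[0, 1:] = 1.0                      # top row:    [0, r^T]
    A[1:, 0] = 1.0                      # left block: [r, Omega + I/gamma]
    A[1:, 1:] = Omega + np.eye(P) / gamma
    rhs = np.concatenate(([0.0], d))
    sol = np.linalg.solve(A, rhs)       # only a linear solve, no QP
    return sol[0], sol[1:]              # bias b and multipliers alpha

def lssvm_predict(Xnew, X, b, alpha, sigma=1.0):
    return kernel(Xnew, X, sigma) @ alpha + b    # Eq. (15)

# Example on a noisy 1-D regression task
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
d = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(40)
b, alpha = lssvm_fit(X, d)
print(lssvm_predict(np.array([[0.0]]), X, b, alpha))   # close to sinc(0) = 1
```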
It can also be recognised that LS-SVM and the method of ridge regression (Saunders et al., 1998) are almost equivalent, as the function to be minimized in ridge regression is

$$F(\mathbf{w}, \mathbf{e}) = \frac{1}{2}\mathbf{w}^T\mathbf{w} + \gamma\,\frac{1}{2}\sum_{i=1}^{P} e_i^2 \qquad (16)$$

with the constraint

$$d_i = \mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i) + e_i, \qquad (17)$$

which means that the approximation error is

$$e_i = d_i - \mathbf{w}^T\boldsymbol{\varphi}(\mathbf{x}_i). \qquad (18)$$

Again the Lagrangian can be constructed, which leads to the solution

$$\boldsymbol{\alpha} = \left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{d} \qquad (19)$$

and

$$f(\mathbf{x}) = \sum_{i=1}^{P}\alpha_i K(\mathbf{x}, \mathbf{x}_i). \qquad (20)$$

The only difference between LS-SVM and ridge regression is that in ridge regression no bias term is used. Further, it can be shown easily that the solution of ridge regression and a regularized LS solution are equivalent. The classical LS solution can be obtained if (18) is substituted into (16); in this case the optimal weight vector is obtained from the minimization of the regularized LS problem:

$$\mathbf{w} = \sum_{i=1}^{P}\alpha_i\,\boldsymbol{\varphi}(\mathbf{x}_i), \qquad \boldsymbol{\alpha} = \left(\boldsymbol{\Omega} + \frac{1}{\gamma}\mathbf{I}\right)^{-1}\mathbf{d}, \qquad (21)$$

where Ω_{i,j} = K(x_i, x_j) as before and [φ(x_1), φ(x_2), ..., φ(x_P)] collects the feature-space images of the training points.
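Dropping the bias row and column of (14) gives the ridge-regression solution (19)-(20). A minimal sketch, reusing the kernel function of the previous snippet:

```python
# Ridge regression in dual variables, Eqs. (19)-(20): identical to the
# LS-SVM sketch above except that the bias row/column is dropped.

def ridge_fit(X, d, gamma=100.0, sigma=1.0):
    Omega = kernel(X, X, sigma)
    return np.linalg.solve(Omega + np.eye(len(d)) / gamma, d)   # Eq. (19)

def ridge_predict(Xnew, X, alpha, sigma=1.0):
    return kernel(Xnew, X, sigma) @ alpha                       # Eq. (20)
```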

The basic difference between the two approaches is that in the LS solution the weight vector w, which is used in the M-dimensional feature space, is obtained directly. Using the Lagrange method the direct results are the Lagrange multipliers, and the weight vector w can be obtained indirectly from α. An important feature is that while the dimension of the feature space may be arbitrarily large, even infinite, the dimension of α is P, the number of training samples. A more important feature is that using the Lagrange approach the solution (20) is expressed as a linear combination of differently positioned kernel functions, where the centre parameters of the kernel functions are determined by the training points. Using the kernel representation, to get the response of the network we do not need to determine the representation in the feature space. The two solutions can be called the primal and the dual solutions. The primal solution can be preferred when the dimension of the feature space is not too high and when there are no difficulties with the implementation of the non-linear functions φ(x) = [φ_0(x), φ_1(x), ..., φ_M(x)]^T. On the other hand, the ridge regression or LS-SVM approach should be preferred if the dimension of the feature space is huge or if the sparse representation has special advantages.

4. CMAC AS A SUPPORT VECTOR MACHINE

The two different approaches discussed previously can be applied in the case of the CMAC. The classical binary CMAC uses rectangular basis functions (first-order B-splines) with fixed positions, which map the input data into the feature space. The association vector corresponds to the feature space representation. In the one-dimensional case the length of the association vector is rather limited, so CMAC can be implemented efficiently in the feature space. A one-dimensional binary CMAC can also be interpreted as an LS-SVM (or, more exactly, as a ridge regression solution, because the classical CMAC does not use a bias term), where the kernel function is obtained from the fixed-position first-order B-splines: the kernel functions will be second-order B-splines whose centre parameters are the input data points. If the support of the rectangular functions is C, measured in quanta of the input data, the support of a kernel function is 2C. In the one-dimensional case the kernel representation of the CMAC has no special advantages. However, in a multidimensional case, using the primal representation we have to reduce the number of rectangular basis functions, i.e. the dimension of the feature space. Without this reduction the complexity of the multidimensional CMAC would increase exponentially with the input dimension. The reduction of the dimension of the feature space in a classical N-dimensional CMAC is achieved by using only C overlays of rectangular basis functions instead of C^N ones (and by applying hash coding, as discussed in Section 2). But this reduction is the real reason for the limited modelling capability of the multidimensional CMAC. If we used C^N overlays, we would get a multidimensional CMAC with the same modelling capability as a one-dimensional network. In the primal space a CMAC with C^N overlays cannot be implemented because of its high complexity. However, in the dual space we use the kernel function, and the complexity will not increase even if the feature (primal) space has a very large dimension. The second-order B-spline kernel function in the two-dimensional case is shown in Figure 3a; this is the version of the continuous second-order B-spline (Figure 3b) discretized according to the quantized input space of the CMAC.

Fig. 3. Second-order B-spline kernel functions for a CMAC with C = 4: (a) discretized, (b) continuous.

The new interpretations have some important consequences.
First of all, there will be no significant difference between the one-dimensional and the multidimensional cases. The modelling capability of the multidimensional version will be the same as that of the one-dimensional CMAC, without the network architecture becoming so complex that it cannot be implemented. In multidimensional cases the SVM (ridge regression) representation will not be equivalent to the Albus CMAC, as it corresponds to a CMAC with an exponentially increasing number of overlays. Figure 4 shows this difference in a simple example: the approximation error at the training points of the 2D sinc function is shown for the Albus CMAC in Figure 4a and for the kernel-based version in Figure 4b. It can be seen that in the latter case all training points can be represented exactly, while the original version is not able to learn all training data exactly.

Fig. 4. The training error of the 2D CMAC (a) and of the kernel network (b) for the 2D sinc function.
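The kernel-based CMAC can be prototyped directly in the dual space. The sketch below (our helper names) uses the tensor-product triangular kernel K(u, v) = Π_k max(0, C − |u_k − v_k|), which counts the common active cells of a full-overlay binary CMAC and thus plays the role of the second-order B-spline kernel on the quantized grid; the weights are fitted by ridge regression (19), and the small training error mirrors Figure 4b:

```python
import numpy as np

# Sketch of a kernel CMAC (hypothetical helper names). For the full-overlay
# binary CMAC the number of C-wide cells containing both u and v factorizes:
# K(u, v) = prod_k max(0, C - |u_k - v_k|), a tensor product of (discretized)
# second-order B-splines. Training uses ridge regression, Eq. (19).

def cmac_kernel(U1, U2, C):
    diff = np.abs(U1[:, None, :] - U2[None, :, :])
    return np.prod(np.maximum(0.0, C - diff), axis=-1)

def kernel_cmac_fit(U, d, C, gamma=1e4):
    Omega = cmac_kernel(U, U, C)
    return np.linalg.solve(Omega + np.eye(len(d)) / gamma, d)  # alpha

def kernel_cmac_predict(Unew, U, alpha, C):
    return cmac_kernel(Unew, U, C) @ alpha

# Example: 2-D sinc on a 32x32 quantized grid, C = 4
rng = np.random.default_rng(0)
U = rng.integers(0, 32, size=(200, 2)).astype(float)
d = np.sinc((U[:, 0] - 16) / 8) * np.sinc((U[:, 1] - 16) / 8)
alpha = kernel_cmac_fit(U, d, C=4)
err = kernel_cmac_predict(U, U, alpha, C=4) - d
print(np.abs(err).max())   # small training error, cf. Figure 4b
```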

The kernel representation and the ridge regression version can be constructed not only for the binary CMAC but for higher-order networks too. In higher-order CMACs, instead of the rectangular basis functions, higher-order B-splines are used. Using a kth-order B-spline in the original CMAC (in the primal space), the corresponding SVM version will use a higher-order ((2k)th-order) B-spline as its SVM kernel. The improved modelling capability has a price: the kernel representation needs higher-order functions even in the case that corresponds to the binary CMAC. Therefore the network cannot be implemented without using multipliers. This drawback, however, can at least partly be compensated using an effective multiplier architecture (see e.g. Szabó et al., 2000) and an efficient construction where all the advantages of this multiplier structure can be utilized.

5. CONCLUSION

In summary, we can conclude that two interpretations of CMAC networks can be used. The first, the original one, applies rectangular basis functions, where the basis functions implement a mapping from the input space into a "feature" space, and the solution is obtained as a linear mapping from this feature space to the output. The SVM approach applies piecewise-linear B-spline basis functions as kernel functions, and the solution is obtained as a mapping from the kernel space into the output space. The first version is better in the one-dimensional case (the network is rather simple, its training is fast, etc.), but the second one is preferable in multidimensional cases because, although it may need a more complex training algorithm, it has better modelling capability.

REFERENCES

Albus, J.S. (1975). "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)". Transactions of the ASME, Sept. 1975, pp. 220-227.
Brown, M., Harris, C.J. and Parks, P. (1993). "The Interpolation Capability of the Binary CMAC". Neural Networks, Vol. 6.
Brown, M. and Harris, C.J. (1994). "Neurofuzzy Adaptive Modelling and Control". Prentice Hall, New York.
Chiang, Ching-Tsan and Lin, Chun-Shin (1996). "CMAC with General Basis Functions". Neural Networks, Vol. 9.
Commuri, S., Lewis, F.L. and Jagannathan, S. (1995). "Discrete-time CMAC Neural Networks for Control Applications". Proc. of the 34th Conference on Decision & Control, New Orleans, LA.
Ellison, D. (1991). "On the Convergence of the Multidimensional Albus Perceptron". The International Journal of Robotics Research, Vol. 10.
Horváth, G. and Deák, F. (1993). "Hardware Implementation of Neural Networks Using FPGA Elements". Proc. of the International Conference on Signal Processing Application and Technology, Santa Clara, Vol. II.
Ker, J.S., Kuo, Y.H. and Liu, B.D. (1995). "Hardware Realization of Higher-order CMAC Model for Color Calibration". Proceedings of the IEEE International Conference on Neural Networks, Perth, Vol. 4.
Lane, S.H., Handelman, D.A. and Gelfand, J.J. (1992). "Theory and Development of Higher-Order CMAC Neural Networks". IEEE Control Systems, Apr. 1992.
Miller, T.W. III, Glanz, F.H. and Kraft, L.G. (1990). "CMAC: An Associative Neural Network Alternative to Backpropagation". Proceedings of the IEEE, Vol. 78.
Miller, W.T., Box, B.A. and Whitney, E.C. (1990). "Design and Implementation of a High Speed CMAC Neural Network Using Programmable CMOS Logic Cell Arrays". Advances in Neural Information Processing Systems 3.
Saunders, C., Gammerman, A. and Vovk, V. (1998). "Ridge Regression Learning Algorithm in Dual Variables". Proc. of the Fifteenth International Conference on Machine Learning.
Smola, A.J. and Schölkopf, B. (1998). "A Tutorial on Support Vector Regression". NeuroCOLT Technical Report Series NC-TR-1998-030, Oct. 1998.
Suykens, J.A.K., Lukas, L. and Vandewalle, J. (2000). "Sparse approximation using least squares support vector machines". Proc. of the IEEE International Symposium on Circuits and Systems ISCAS 2000, Vol. II.
Suykens, J.A.K. (2001). "Nonlinear Modelling and Support Vector Machines". Proc. of the IEEE Instrumentation and Measurement Technology Conference, Budapest, Vol. I.
I, pp Szabó,. and Horváth, G. (999 CMAC and ts Extensons for Effcent System Modellng" Int. Journal of Appled Mathematcs and Computatons, Vol. 9, pp Szabó,. and Horváth, G. ( "CMAC Neural Network wth Improved Generalzaton Capablty for System Modellng" roc. of the IEEE Conference on Instrumentaton and Measurement, Anchorage, AK. Vol. II, pp Szabó,., Anton, L., Horváth, G. and Fehér, B. ( An effcent mplementaton for a matrx-vector multpler structure, n roc. of IEEE Internatonal Jont Conference on Neural Networks, IJCNN, Vol. II, pp hompson D.E. and Kwon S. (995: Neghbourhood Sequental and Random ranng echnques for CMAC. IEEE rans. on Neural Networks, Vol. 6, pp Valyon, J. and Horváth, G. ( "Reducng the Complexty and Network Sze of LS SVM Solutons" submtted to IEEE rans. on Neural Networks. Vapnk, V. (995 "he Nature of Statstcal Learnng heory", Sprnger, New York.

