On the Equality of Kernel AdaTron and Sequential Minimal Optimization in Classification and Regression Tasks and Alike Algorithms for Kernel Machines


Proceedings of the 11th European Symposium on Artificial Neural Networks (ESANN 2003), Bruges, Belgium, 2003

On the Equality of Kernel AdaTron and Sequential Minimal Optimization in Classification and Regression Tasks and Alike Algorithms for Kernel Machines

Vojislav Kecman 1, Michael Vogt 2, Te Ming Huang 1

1 School of Engineering, The University of Auckland, Auckland, New Zealand
2 Institute of Automatic Control, TU Darmstadt, Darmstadt, Germany
e-mail: v.kecman@auckland.ac.nz, mvogt@iat.tu-darmstadt.de

Abstract: The paper presents the equality of the kernel AdaTron (KA) method (originating from a gradient ascent learning approach) and the sequential minimal optimization (SMO) learning algorithm (based on an analytic quadratic programming step) in designing support vector machines (SVMs) having positive definite kernels. The conditions for the equality of the two methods are established. The equality is valid for both the nonlinear classification and the nonlinear regression tasks, and it sheds new light on these seemingly different learning approaches. The paper also introduces other learning techniques related to the two mentioned approaches, such as the non-negative conjugate gradient, the classic Gauss-Seidel (GS) coordinate ascent procedure and its derivative known as the successive over-relaxation (SOR) algorithm, as viable and usually faster training algorithms for performing nonlinear classification and regression tasks. The convergence theorem for these related iterative algorithms is proven.

1. Introduction

One of the mainstream research fields in learning from empirical data by support vector machines, for solving both classification and regression problems, is the implementation of incremental learning schemes when the training data set is huge. Among the several candidates that avoid the use of standard quadratic programming (QP) solvers, the two learning approaches which have recently received attention are the KA (Anlauf, Biehl, 1989; Frieß, Cristianini, Campbell, 1998; Veropoulos, 2001) and the SMO (Platt, 1998, 1999; Vogt, 2002). Due to its analytical foundation, the SMO approach is particularly popular and at the moment the most widely used, analyzed and still heavily developing algorithm. At the same time the KA, although providing similar results in solving classification problems (in terms of both the accuracy and the training computation time required), did not attract that many devotees. There are two basic reasons for that. First, until recently (Veropoulos, 2001), the KA seemed to be restricted to classification problems only, and second, it 'lacked' the fleur of a strong theory (despite its beautiful 'simplicity' and strong convergence proofs). The KA is based on a gradient ascent technique, and this fact might also have distracted some researchers aware of the problems gradient ascent approaches face with a possibly ill-conditioned kernel matrix. Here we show when and why the recently developed algorithms for SMO using positive definite kernels or models without a bias term (Vogt, 2002), and the KA for both classification (Frieß, Cristianini, Campbell, 1998) and regression (Veropoulos, 2001), are identical.

Both the KA and the SMO algorithm attempt to solve the following QP problem in the case of classification (Vapnik, 1995; Cherkassky and Mulier, 1998; Cristianini and Shawe-Taylor, 2000; Kecman, 2001; Schölkopf and Smola, 2002) - maximize the dual Lagrangian

$$L_d(\boldsymbol{\alpha}) = \sum_{i=1}^{\ell} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{\ell} y_i y_j \alpha_i \alpha_j K(\mathbf{x}_i, \mathbf{x}_j), \qquad (1)$$

subject to

$$\alpha_i \geq 0, \quad i = 1, \ldots, \ell, \qquad \text{and} \qquad \sum_{i=1}^{\ell} \alpha_i y_i = 0, \qquad (2)$$

where $\ell$ is the number of training data pairs, $\alpha_i$ are the dual Lagrange variables, $y_i$ are the class labels (±1), and the $K(\mathbf{x}_i, \mathbf{x}_j)$ are the kernel function values. Because of noise or generic class features, there will be an overlapping of training data points. Nothing but the constraints in solving (1) changes, and they become

$$0 \leq \alpha_i \leq C, \quad i = 1, \ldots, \ell, \qquad \text{and} \qquad \sum_{i=1}^{\ell} \alpha_i y_i = 0, \qquad (3)$$

where $0 < C < \infty$ is a penalty parameter trading off the size of the margin against the number of misclassifications. In the case of nonlinear regression, the learning problem is the maximization of the dual Lagrangian below

$$L_d(\boldsymbol{\alpha}, \boldsymbol{\alpha}^*) = -\varepsilon \sum_{i=1}^{\ell} (\alpha_i + \alpha_i^*) + \sum_{i=1}^{\ell} (\alpha_i - \alpha_i^*)\, y_i - \frac{1}{2}\sum_{i,j=1}^{\ell} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)\, K(\mathbf{x}_i, \mathbf{x}_j), \qquad (4)$$

$$\text{s.t.} \quad \sum_{i=1}^{\ell} \alpha_i = \sum_{i=1}^{\ell} \alpha_i^*, \qquad (4a)$$

$$0 \leq \alpha_i \leq C, \quad 0 \leq \alpha_i^* \leq C, \quad i = 1, \ldots, \ell, \qquad (4b)$$

where $\varepsilon$ is a prescribed size of the insensitivity zone, and $\alpha_i$ and $\alpha_i^*$ ($i = 1, \ldots, \ell$) are the Lagrange multipliers for the points above and below the regression function respectively. Learning results in $\ell$ Lagrange multiplier pairs $(\alpha_i, \alpha_i^*)$. Because no training data can be on both sides of the tube, either $\alpha_i$ or $\alpha_i^*$ will be nonzero, i.e., $\alpha_i \alpha_i^* = 0$.

2. The KA and SMO learning algorithms without-bias-term

It is known that positive definite kernels (such as the most popular and widely used RBF Gaussian kernels, as well as the complete polynomial ones) do not require a bias term (Evgeniou, Pontil, Poggio, 2000). Below, the KA and the SMO algorithms will be presented for such a fixed (i.e., no-) bias design problem and compared for the classification and regression cases. The equality of the two learning schemes and of the resulting models will be established. Originally, in (Platt, 1998, 1999), the SMO classification algorithm was developed for solving the problem (1) including the constraints related to the bias b. In these early publications the case when the bias b is a fixed variable was also mentioned, but a detailed analysis of a fixed bias update was not accomplished.
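As a concrete rendering of the optimization problem the two algorithms share, the following minimal sketch builds the classification dual (1) numerically, assuming a Gaussian RBF kernel. It is an illustration, not code from the paper; the function names and the parameter sigma are this sketch's own choices.

```python
import numpy as np

def rbf_kernel_matrix(X, sigma=1.0):
    # K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)); note K[i, i] = 1.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def dual_objective(alpha, y, K):
    # L_d(alpha) = sum_i alpha_i - 0.5 * alpha^T H alpha, with H = (y y^T) * K,
    # which is eq. (1) in matrix form.
    H = np.outer(y, y) * K
    return alpha.sum() - 0.5 * alpha @ H @ alpha
```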

2.1 Incremental Learning in Classification

a) Kernel AdaTron in classification

The classic AdaTron algorithm as given in (Anlauf and Biehl, 1989) is developed for a linear classifier. The KA is a variant of the classic AdaTron algorithm in the feature space of SVMs (Frieß et al., 1998). The KA algorithm solves the maximization of the dual Lagrangian (1) by implementing a gradient ascent algorithm. The update of the dual variables $\alpha_i$ is given as

$$\Delta\alpha_i = \eta \frac{\partial L_d}{\partial \alpha_i} = \eta \left(1 - y_i \sum_{j=1}^{\ell} \alpha_j y_j K(\mathbf{x}_j, \mathbf{x}_i)\right) = \eta\,(1 - y_i f_i), \qquad (5a)$$

where $f_i$ is the value of the decision function $f$ at the point $\mathbf{x}_i$, i.e., $f_i = \sum_{j=1}^{\ell} \alpha_j y_j K(\mathbf{x}_j, \mathbf{x}_i)$, and $y_i$ denotes the value of the desired target (or the class label), which is either +1 or -1. The update of the dual variables $\alpha_i$ is given as

$$\alpha_i \leftarrow \min(\max(0,\ \alpha_i + \Delta\alpha_i),\ C) \quad (i = 1, \ldots, \ell). \qquad (5b)$$

In other words, the dual variables $\alpha_i$ are clipped to zero if $(\alpha_i + \Delta\alpha_i) < 0$. In the case of the soft nonlinear classifier ($C < \infty$) the $\alpha_i$ are clipped between zero and $C$ ($0 \leq \alpha_i \leq C$). The algorithm converges from any initial setting for the Lagrange multipliers $\alpha_i$.

b) SMO without-bias-term in classification

Recently, (Vogt, 2002) derived the update rule for the multipliers $\alpha_i$ that includes a detailed analysis of the Karush-Kuhn-Tucker (KKT) conditions for checking the optimality of the solution. (As mentioned above, a fixed bias update was mentioned in Platt's papers.) The following update rule for $\alpha_i$ for a no-bias SMO algorithm was proposed

$$\Delta\alpha_i = -\frac{y_i E_i}{K(\mathbf{x}_i, \mathbf{x}_i)} = \frac{y_i (y_i - f_i)}{K(\mathbf{x}_i, \mathbf{x}_i)} = \frac{1 - y_i f_i}{K(\mathbf{x}_i, \mathbf{x}_i)}, \qquad (6)$$

where $E_i = f_i - y_i$ denotes the difference between the value of the decision function $f$ at the point $\mathbf{x}_i$ and the desired target (label) $y_i$. Note the equality of (5a) and (6) when the learning rate in (5a) is chosen to be $\eta_i = 1/K(\mathbf{x}_i, \mathbf{x}_i)$. The important part of the SMO algorithm is to check the KKT conditions with precision $\tau$ (e.g., $\tau = 10^{-3}$) in each step. An update is performed only if

$$\alpha_i < C \ \wedge \ y_i E_i < -\tau, \qquad \text{or} \qquad \alpha_i > 0 \ \wedge \ y_i E_i > \tau. \qquad (6a)$$

After an update, the same clipping operation as in (5b) is performed

$$\alpha_i \leftarrow \min(\max(0,\ \alpha_i + \Delta\alpha_i),\ C) \quad (i = 1, \ldots, \ell). \qquad (6b)$$

It is the nonlinear clipping operation in (5b) and in (6b) that makes the KA and the SMO without-bias-term algorithm strictly equal in solving nonlinear classification problems. This fact sheds new light on both algorithms. The equality is not that obvious in the case of a 'classic' SMO algorithm with bias term, due to the heuristics involved in the selection of active points, which should ensure the largest increase of the dual Lagrangian $L_d$ during the iterative optimization steps.
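To see the equality of (5a)-(5b) and (6)-(6b) operationally, here is a minimal sketch of a no-bias KA/SMO sweep over the training set, assuming the kernel matrix K and the labels y (±1) are given as NumPy arrays. The function name, epoch-based stopping rule and loop structure are illustrative choices, not the authors' implementation.

```python
import numpy as np

def kernel_adatron_classify(K, y, C=1.0, tau=1e-3, max_epochs=100):
    """No-bias KA / SMO sweep, eqs. (5)-(6): with eta_i = 1 / K(x_i, x_i)
    the KA gradient step (5a) coincides with the SMO step (6)."""
    l = len(y)
    alpha = np.zeros(l)
    for _ in range(max_epochs):
        changed = False
        for i in range(l):
            f_i = np.sum(alpha * y * K[:, i])   # decision function at x_i
            E_i = f_i - y[i]
            # KKT check with precision tau, as in (6a)
            if (alpha[i] < C and y[i] * E_i < -tau) or \
               (alpha[i] > 0 and y[i] * E_i > tau):
                delta = (1.0 - y[i] * f_i) / K[i, i]   # (6) = (5a) with eta_i = 1/K_ii
                alpha[i] = min(max(0.0, alpha[i] + delta), C)   # clipping (5b)/(6b)
                changed = True
        if not changed:       # all KKT conditions satisfied within tau
            break
    return alpha
```

Note that within one sweep the already-updated multipliers enter $f_i$ immediately, which is precisely the coordinate-ascent behaviour discussed in Section 3.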

2.2 Incremental Learning in Regression

Similarly to the case of classification, there is a strict equality between the KA and the SMO algorithm when positive definite kernels are used for nonlinear regression.

a) Kernel AdaTron in regression

The first extension of the Kernel AdaTron algorithm to regression is presented in (Veropoulos, 2001) as the following gradient ascent update rules for $\alpha_i$ and $\alpha_i^*$

$$\Delta\alpha_i = \eta \frac{\partial L_d}{\partial \alpha_i} = \eta \left(y_i - \varepsilon - \sum_{j=1}^{\ell} (\alpha_j - \alpha_j^*) K(\mathbf{x}_j, \mathbf{x}_i)\right) = \eta\,(y_i - \varepsilon - f_i) = -\eta\,(E_i + \varepsilon), \qquad (7a)$$

$$\Delta\alpha_i^* = \eta \frac{\partial L_d}{\partial \alpha_i^*} = \eta \left(-y_i - \varepsilon + \sum_{j=1}^{\ell} (\alpha_j - \alpha_j^*) K(\mathbf{x}_j, \mathbf{x}_i)\right) = \eta\,(f_i - y_i - \varepsilon) = \eta\,(E_i - \varepsilon), \qquad (7b)$$

where $y_i$ is the measured value for the input $\mathbf{x}_i$, $\varepsilon$ is the prescribed insensitivity zone, and $E_i = f_i - y_i$ stands for the difference between the regression function $f$ at the point $\mathbf{x}_i$ and the desired target value $y_i$ at this point. The calculation of the gradient above does not take into account the geometric reality that no training data can be on both sides of the tube. In other words, it does not use the fact that either $\alpha_i$ or $\alpha_i^*$ (or both) will be zero, i.e., that $\alpha_i \alpha_i^* = 0$ must be fulfilled in each iteration step. Below we derive the gradients of the dual Lagrangian $L_d$ accounting for this geometry. This new formulation of the KA algorithm strictly equals the SMO method, and it is given as

$$\frac{\partial L_d}{\partial \alpha_i} = -K(\mathbf{x}_i, \mathbf{x}_i)\,\alpha_i^* - \sum_{j=1}^{\ell} (\alpha_j - \alpha_j^*) K(\mathbf{x}_j, \mathbf{x}_i) + y_i - \varepsilon = -\left(K(\mathbf{x}_i, \mathbf{x}_i)\,\alpha_i^* + E_i + \varepsilon\right). \qquad (8a)$$

For the $\alpha_i^*$ multipliers, the value of the gradient is

$$\frac{\partial L_d}{\partial \alpha_i^*} = -\left(K(\mathbf{x}_i, \mathbf{x}_i)\,\alpha_i - E_i + \varepsilon\right). \qquad (8b)$$

The update value for $\alpha_i$ is now

$$\Delta\alpha_i = \eta \frac{\partial L_d}{\partial \alpha_i} = -\eta \left(K(\mathbf{x}_i, \mathbf{x}_i)\,\alpha_i^* + E_i + \varepsilon\right), \qquad (9a)$$

$$\alpha_i \leftarrow \alpha_i + \Delta\alpha_i = \alpha_i - \eta \left(K(\mathbf{x}_i, \mathbf{x}_i)\,\alpha_i^* + E_i + \varepsilon\right). \qquad (9b)$$

For the learning rate $\eta = 1/K(\mathbf{x}_i, \mathbf{x}_i)$ the gradient ascent learning KA is defined as

$$\alpha_i \leftarrow \alpha_i - \alpha_i^* - \frac{E_i + \varepsilon}{K(\mathbf{x}_i, \mathbf{x}_i)}. \qquad (10a)$$

Similarly, the update rule for $\alpha_i^*$ is

$$\alpha_i^* \leftarrow \alpha_i^* - \alpha_i + \frac{E_i - \varepsilon}{K(\mathbf{x}_i, \mathbf{x}_i)}. \qquad (10b)$$

Same as in the classification, $\alpha_i$ and $\alpha_i^*$ are clipped between zero and $C$,

$$\alpha_i \leftarrow \min(\max(0, \alpha_i),\ C) \quad (i = 1, \ldots, \ell), \qquad (11a)$$

$$\alpha_i^* \leftarrow \min(\max(0, \alpha_i^*),\ C) \quad (i = 1, \ldots, \ell). \qquad (11b)$$
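The updates (10a, b) followed by the clipping (11a, b) can be sketched per data point as follows. This is an illustrative fragment, not the paper's code; the array names alpha and alpha_s (for $\alpha^*$) are assumptions. Observe that the two pre-clip values sum to $-2\varepsilon/K_{ii} \le 0$, so after clipping at most one multiplier stays positive, consistent with $\alpha_i \alpha_i^* = 0$.

```python
import numpy as np

def ka_regression_step(K, y, alpha, alpha_s, eps, C, i):
    """One KA regression update at point i, eqs. (10)-(11), with the
    optimal learning rate eta = 1 / K(x_i, x_i). Updates arrays in place."""
    f_i = (alpha - alpha_s) @ K[:, i]        # regression function at x_i
    E_i = f_i - y[i]
    a_new  = alpha[i]  - alpha_s[i] - (E_i + eps) / K[i, i]   # (10a)
    as_new = alpha_s[i] - alpha[i]  + (E_i - eps) / K[i, i]   # (10b)
    alpha[i]   = min(max(0.0, a_new),  C)    # clipping (11a)
    alpha_s[i] = min(max(0.0, as_new), C)    # clipping (11b)
```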

b) SMO without-bias-term in regression

The first algorithm for the SMO without-bias-term in regression (together with a detailed analysis of the KKT conditions for checking the optimality of the solution) is derived in (Vogt, 2002). The following learning rules for the updates of the Lagrange multipliers $\alpha_i$ and $\alpha_i^*$ were proposed

$$\alpha_i \leftarrow \alpha_i - \alpha_i^* - \frac{E_i + \varepsilon}{K(\mathbf{x}_i, \mathbf{x}_i)}, \qquad (12a)$$

$$\alpha_i^* \leftarrow \alpha_i^* - \alpha_i + \frac{E_i - \varepsilon}{K(\mathbf{x}_i, \mathbf{x}_i)}. \qquad (12b)$$

The equality of equations (10a, b) and (12a, b) is obvious when the learning rate, as presented above in (10a, b), is chosen to be $\eta = 1/K(\mathbf{x}_i, \mathbf{x}_i)$. Thus, in both the classification and the regression, the optimal learning rate is not necessarily equal for all training data pairs. For a Gaussian kernel, $\eta = 1$ is the same for all data points, while for a complete n-th order polynomial kernel each data point has a different learning rate $\eta_i = 1/(\mathbf{x}_i^T \mathbf{x}_i + 1)^n$. Similarly to classification, a joint update of $\alpha_i$ and $\alpha_i^*$ is performed only if the KKT conditions are violated by at least $\tau$, i.e., if

$$\alpha_i < C \ \wedge \ \varepsilon + E_i < -\tau, \quad \text{or} \quad \alpha_i > 0 \ \wedge \ \varepsilon + E_i > \tau, \quad \text{or}$$
$$\alpha_i^* < C \ \wedge \ \varepsilon - E_i < -\tau, \quad \text{or} \quad \alpha_i^* > 0 \ \wedge \ \varepsilon - E_i > \tau. \qquad (13)$$

After the changes, the same clipping operations as defined in (11) are performed

$$\alpha_i \leftarrow \min(\max(0, \alpha_i),\ C) \quad (i = 1, \ldots, \ell), \qquad (14a)$$

$$\alpha_i^* \leftarrow \min(\max(0, \alpha_i^*),\ C) \quad (i = 1, \ldots, \ell). \qquad (14b)$$

The KA learning as formulated in this paper and the SMO algorithm without-bias-term for solving regression tasks are strictly equal in terms of both the number of iterations required and the final values of the Lagrange multipliers. The equality is strict despite the fact that the implementations are slightly different: in every iteration step, namely, the KA algorithm updates both weights $\alpha_i$ and $\alpha_i^*$ without any check of whether the KKT conditions are fulfilled, while the SMO performs an update according to equations (13).
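A corresponding sketch of the SMO-without-bias regression sweep, gating each joint update on the KKT check (13) and applying (12a, b) with the clipping (14a, b). The names and the epoch-based stopping rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smo_nobias_regression(K, y, eps=0.1, C=1.0, tau=1e-3, max_epochs=200):
    """No-bias SMO regression sweep: a joint update of (alpha_i, alpha_i*)
    is performed only when the KKT conditions (13) are violated by >= tau."""
    l = len(y)
    alpha, alpha_s = np.zeros(l), np.zeros(l)
    for _ in range(max_epochs):
        changed = False
        for i in range(l):
            E_i = (alpha - alpha_s) @ K[:, i] - y[i]   # E_i = f_i - y_i
            kkt_violated = ((alpha[i]   < C and eps + E_i < -tau) or
                            (alpha[i]   > 0 and eps + E_i >  tau) or
                            (alpha_s[i] < C and eps - E_i < -tau) or
                            (alpha_s[i] > 0 and eps - E_i >  tau))
            if kkt_violated:
                a_new  = alpha[i]  - alpha_s[i] - (E_i + eps) / K[i, i]  # (12a)
                as_new = alpha_s[i] - alpha[i]  + (E_i - eps) / K[i, i]  # (12b)
                alpha[i]   = min(max(0.0, a_new),  C)   # (14a)
                alpha_s[i] = min(max(0.0, as_new), C)   # (14b)
                changed = True
        if not changed:
            break
    return alpha, alpha_s
```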

3. The Coordinate Ascent Based Learning for Nonlinear Classification and Regression Tasks

When positive definite kernels are used, the learning problem is the same for both tasks. In vector-matrix notation, in a dual space, the learning is represented as:

$$\text{maximize} \quad L_d(\boldsymbol{\alpha}) = -0.5\,\boldsymbol{\alpha}^T \mathbf{K} \boldsymbol{\alpha} + \mathbf{f}^T \boldsymbol{\alpha} \qquad (15)$$

$$\text{s.t.} \quad 0 \leq \alpha_i \leq C \quad (i = 1, \ldots, n), \qquad (16)$$

where, in classification, $n = \ell$ and the matrix $\mathbf{K}$ is an $(\ell, \ell)$ symmetric positive definite matrix, while in regression $n = 2\ell$ and $\mathbf{K}$ is a $(2\ell, 2\ell)$ symmetric positive semidefinite one. Note that the constraints (16) define a convex subspace over which the convex dual Lagrangian should be maximized. It is very well known that the vector $\boldsymbol{\alpha}$ may be looked at as the solution of the system of linear equations

$$\mathbf{K} \boldsymbol{\alpha} = \mathbf{f} \qquad (17)$$

subject to the same constraints as given by (16). Thus, it may seem natural to solve (17), subject to (16), by applying some of the well known and established techniques for solving a general linear system of equations. The size of the training data set and the constraints (16) eliminate direct techniques. Hence, one has to resort to iterative approaches. There are three possible iterative avenues that can be followed: the use of the Non-Negative Least Squares (NNLS) technique (Lawson and Hanson, 1974), the application of the Non-Negative Conjugate Gradient (NNCG) method (Hestenes, 1980), and the implementation of Gauss-Seidel (GS), i.e., of the related Successive Over-Relaxation (SOR) technique. The first two methods handle the non-negativity constraints only. Thus, they are not suitable for solving 'soft' tasks, when a penalty parameter $C < \infty$ is used, i.e., when there is an upper bound on the maximal value of $\alpha_i$. Nevertheless, in the case of nonlinear regression, one can apply NNLS and NNCG by taking $C = \infty$ and compensating (i.e., smoothing or 'softening' the solution) by increasing the insensitivity zone $\varepsilon$. However, these two methods (namely NNLS and NNCG) are not suitable for solving soft margin ($C < \infty$) classification problems in their present form, because there is no other parameter that can be used to 'soften' the margin. Here we show how to extend the application of GS and SOR to both the nonlinear classification and the nonlinear regression tasks.

The Gauss-Seidel method solves (17) by using the $i$-th equation to update the $i$-th unknown, doing so iteratively, i.e., starting in the $(k+1)$-th step with the first equation to compute $\alpha_1^{k+1}$, then using the second equation to calculate $\alpha_2^{k+1}$ from the new $\alpha_1^{k+1}$ and the old $\alpha_j^k$ ($j > 2$), and so on. The iterative learning takes the following form

$$\alpha_i^{k+1} = \frac{1}{K_{ii}}\left(f_i - \sum_{j=1}^{i-1} K_{ij}\alpha_j^{k+1} - \sum_{j=i+1}^{n} K_{ij}\alpha_j^{k}\right) = \alpha_i^k + \frac{1}{K_{ii}}\left(f_i - \sum_{j=1}^{i-1} K_{ij}\alpha_j^{k+1} - \sum_{j=i}^{n} K_{ij}\alpha_j^{k}\right) = \alpha_i^k + \frac{1}{K_{ii}}\left.\frac{\partial L_d}{\partial \alpha_i}\right|^{k+1}, \qquad (18)$$

where we use the fact that the term within the second bracket (called the residual $r_i$ in the mathematical references) is the $i$-th element of the gradient of the dual Lagrangian $L_d$ given in (15) at the $(k+1)$-th iteration step. Equation (18) shows that the GS method is a coordinate gradient ascent procedure, just as the KA and the SMO are. The KA and SMO for positive definite kernels equal the GS! Note that the optimal learning rate used in both the KA algorithm and in the SMO without-bias-term approach is exactly equal to the coefficient $1/K_{ii}$ in the GS method.
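The clipped Gauss-Seidel sweep for (17) subject to (16) can be sketched as follows. Because the multipliers are updated in place, the residual computed inside the loop uses the new values for $j < i$ and the old values for $j \ge i$, i.e., it is exactly the gradient term in (18). The function name and the tolerance-based stopping rule are illustrative choices.

```python
import numpy as np

def gauss_seidel_box(K, f, C=np.inf, max_iter=500, tol=1e-6):
    """Gauss-Seidel coordinate ascent for K alpha = f, eq. (18),
    with each coordinate clipped into [0, C] as in the KA/SMO schemes."""
    n = len(f)
    alpha = np.zeros(n)
    for _ in range(max_iter):
        max_step = 0.0
        for i in range(n):
            r_i = f[i] - K[i, :] @ alpha          # residual = dL_d / dalpha_i
            new = min(max(0.0, alpha[i] + r_i / K[i, i]), C)   # GS step + clip
            max_step = max(max_step, abs(new - alpha[i]))
            alpha[i] = new
        if max_step < tol:
            break
    return alpha
```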

Based on this equality, the convergence theorem for the KA, SMO and GS (i.e., SOR) in solving (15) subject to the constraints (16) can be stated and proved as follows:

Theorem: For SVMs with positive definite kernels, the iterative learning algorithms KA, i.e., SMO, i.e., GS, i.e., SOR, in solving the nonlinear classification and regression tasks (15) subject to the constraints (16), converge starting from any initial choice of $\boldsymbol{\alpha}^0$.

Proof: The proof is based on the very well known theorem of convergence of the GS method for symmetric positive definite matrices in solving (17) without constraints (Ostrowski, 1966). First note that for positive definite kernels the matrix $\mathbf{K}$, created by the terms $y_i y_j K(\mathbf{x}_i, \mathbf{x}_j)$ in the second sum in (1) and involved in solving the classification problem, is also positive definite. In regression tasks $\mathbf{K}$ is a symmetric positive semidefinite (meaning still convex) matrix, which after a mild regularization given as $\mathbf{K} \leftarrow \mathbf{K} + \lambda\mathbf{I}$ (with a small $\lambda$) becomes a positive definite one. (Note that the proof in the case of regression does not need regularization at all, but there is no space here to go into these details.) Hence, the learning without the constraints (16) converges, starting from any initial point $\boldsymbol{\alpha}^0$, and each point in the $n$-dimensional search space for the multipliers $\alpha_i$ is a viable starting point ensuring convergence of the algorithm to the maximum of the dual Lagrangian $L_d$. This naturally includes all the (starting) points within, or on the boundary of, any convex subspace of the search space, ensuring the convergence of the algorithm to the maximum of the dual Lagrangian $L_d$ over the given subspace. The constraints imposed by (16), preventing the variables $\alpha_i$ from being negative or bigger than $C$ and implemented by the clipping operators above, define such a convex subspace. Thus, each 'clipped' multiplier value $\alpha_i$ defines a new starting point of the algorithm, guaranteeing the convergence to the maximum of $L_d$ over the subspace defined by (16). For a convex constraining subspace such a constrained maximum is unique. Q.E.D.

Due to the lack of space we do not discuss the convergence rate here and leave it for another occasion. It should only be mentioned that both KA and SMO (i.e., GS and SOR) for positive definite kernels have been successfully applied to many problems (see the references given here, as well as many others, benchmarking the mentioned methods on various data sets). Finally, let us just mention that the standard extension of the GS method is the method of successive over-relaxation, which can significantly reduce the number of iterations required through a proper choice of the relaxation parameter $\omega$. The SOR method uses the following updating rule

$$\alpha_i^{k+1} = \alpha_i^k + \frac{\omega}{K_{ii}}\left(f_i - \sum_{j=1}^{i-1} K_{ij}\alpha_j^{k+1} - \sum_{j=i}^{n} K_{ij}\alpha_j^{k}\right) = \alpha_i^k + \frac{\omega}{K_{ii}}\left.\frac{\partial L_d}{\partial \alpha_i}\right|^{k+1}, \qquad (19)$$

and, similarly to the KA, SMO, and GS, its convergence is guaranteed.
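For completeness, a sketch of the SOR variant (19): the only change with respect to the Gauss-Seidel sweep above is that the step is scaled by $\omega$, with $\omega = 1$ recovering GS. The default omega=1.5 is an arbitrary illustrative choice; as the paper notes, the optimal value is problem dependent.

```python
import numpy as np

def sor_box(K, f, omega=1.5, C=np.inf, max_iter=500, tol=1e-6):
    """SOR variant of the clipped Gauss-Seidel sweep, eq. (19):
    the GS step is scaled by a relaxation parameter 0 < omega < 2."""
    n = len(f)
    alpha = np.zeros(n)
    for _ in range(max_iter):
        max_step = 0.0
        for i in range(n):
            r_i = f[i] - K[i, :] @ alpha                    # residual at step k+1
            new = min(max(0.0, alpha[i] + omega * r_i / K[i, i]), C)
            max_step = max(max_step, abs(new - alpha[i]))
            alpha[i] = new
        if max_step < tol:
            break
    return alpha
```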

4. Conclusions

Both the KA and the SMO algorithms were recently developed and introduced as alternatives for solving the quadratic programming problem while training support vector machines on huge data sets. It was shown that, when positive definite kernels are used, the two algorithms are identical in their analytic form and numerical implementation. In addition, for positive definite kernels both algorithms are strictly identical with the classic iterative GS (optimal coordinate ascent) learning and its extension SOR. Till now, these facts were blurred, mainly due to the different settings in which the learning problems were posed and due to the 'heavy' heuristics involved in SMO implementations, which obscured the possible identity of the methods. It is shown that in the so-called no-bias SVMs, both the KA and the SMO procedure are coordinate ascent based methods. Finally, due to the many ways in which all three algorithms (KA, SMO and GS, i.e., SOR) can be implemented, there may be some differences in their overall behaviour. The introduction of the relaxation parameter $0 < \omega < 2$ will speed up the algorithm; the exact optimal value $\omega_{opt}$ is problem dependent.

Acknowledgment: The results presented were initiated during the stay of the first author at Prof. Rolf Isermann's Institute and sponsored by the Deutsche Forschungsgemeinschaft (DFG). He is thankful to both Prof. Rolf Isermann and the DFG for all the support during this stay.

5. References

1. Anlauf, J. K., Biehl, M., The AdaTron - an adaptive perceptron algorithm. Europhysics Letters, 10(7), 1989.
2. Cherkassky, V., Mulier, F., Learning From Data: Concepts, Theory and Methods, John Wiley & Sons, New York, NY, 1998.
3. Cristianini, N., Shawe-Taylor, J., An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.
4. Evgeniou, T., Pontil, M., Poggio, T., Regularization networks and support vector machines, Advances in Computational Mathematics, 13, pp. 1-50, 2000.
5. Frieß, T.-T., Cristianini, N., Campbell, I. C. G., The Kernel-Adatron: a Fast and Simple Learning Procedure for Support Vector Machines. In Shavlik, J., editor, Proceedings of the 15th International Conference on Machine Learning, Morgan Kaufmann, San Francisco, CA, 1998.
6. Kecman, V., Learning and Soft Computing, Support Vector Machines, Neural Networks, and Fuzzy Logic Models, The MIT Press, Cambridge, MA, 2001.
7. Lawson, C. L., Hanson, R. J., Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ, 1974.
8. Ostrowski, A. M., Solutions of Equations and Systems of Equations, 2nd ed., Academic Press, New York, 1966.
9. Platt, J. C., Sequential minimal optimization: A fast algorithm for training support vector machines. TR MSR-TR-98-14, Microsoft Research, 1998.
10. Platt, J. C., Fast Training of Support Vector Machines using Sequential Minimal Optimization. Ch. 12 in Advances in Kernel Methods - Support Vector Learning, edited by B. Schölkopf, C. Burges, A. Smola, The MIT Press, Cambridge, MA, 1999.
11. Schölkopf, B., Smola, A., Learning with Kernels - Support Vector Machines, Regularization, Optimization, and Beyond, The MIT Press, Cambridge, MA, 2002.
12. Veropoulos, K., Machine Learning Approaches to Medical Decision Making, PhD Thesis, The University of Bristol, Bristol, UK, 2001.
13. Vapnik, V. N., The Nature of Statistical Learning Theory, Springer Verlag, New York, NY, 1995.
14. Vogt, M., SMO Algorithms for Support Vector Machines without Bias, Institute Report, Institute of Automatic Control, TU Darmstadt, Darmstadt, Germany, 2002.
