MaxMinOver Regression: A Simple Incremental Approach for Support Vector Function Approximation

Daniel Schneegaß (1,2), Kai Labusch (1), and Thomas Martinetz (1)

(1) Institute for Neuro- and Bioinformatics, University of Lübeck, D-23538 Lübeck, Germany, martinetz@informatik.uni-luebeck.de
(2) Information & Communications, Learning Systems, Siemens AG, Corporate Technology, D-81739 Munich, Germany, daniel.schneegass.ext@siemens.com

S. Kollias et al. (Eds.): ICANN 2006, Part I, LNCS 4131, pp. 150–158, 2006. © Springer-Verlag Berlin Heidelberg 2006

Abstract. The well-known MinOver algorithm is a simple modification of the perceptron algorithm and provides the maximum margin classifier without a bias in linearly separable two-class classification problems. In [1] and [2] we presented DoubleMinOver and MaxMinOver as extensions of MinOver which provide the maximal margin solution in the primal and the Support Vector solution in the dual formulation by dememorising non-Support Vectors. These two approaches were augmented to soft margins based on the ν-SVM and the C2-SVM. We extended the latter approach to SoftDoubleMaxMinOver [3], and finally this method leads to a Support Vector regression algorithm which is as efficient, and whose implementation is as simple, as the C2-SoftDoubleMaxMinOver classification algorithm.

1 Introduction

The Support Vector Machine (SVM) [4], [5] is a very efficient, universal and powerful tool for classification and regression tasks, see e.g. [6], [7], [8]. A major drawback, particularly for industrial applications where easy and robust implementation is an issue, is its complicated training procedure. A large Quadratic Programming problem has to be solved, which requires sophisticated numerical optimisation routines which many users do not want to or cannot implement by themselves. They have to rely on existing software packages, which are hardly comprehensive and, in some cases at least, not error-free. This is in contrast to most Neural Network approaches, where learning has to be simple and incremental almost by definition. The iterative and incremental nature of learning in Neural Networks usually leads to simple training procedures which can easily be implemented. It is desirable to have similar training procedures also for the SVM. Several approaches for obtaining more or less simple incremental training procedures for Support Vector classification and regression have been introduced so far [9], [10], [11], [12]. We want to mention in particular the Kernel-Adatron by Friess, Cristianini, and Campbell [9] and the Sequential Minimal Optimisation (SMO) algorithm by Platt [10], which is the most widespread iterative training procedure for SVMs. It is fast and robust, but it is still not of a pattern-by-pattern nature and thus not well suited as a

starting point for an online learning scheme. It is not yet as easy to implement as one is used to from Neural Network approaches. With this goal in mind, in [2] and [1] we revisited and extended the MinOver algorithm. The so-called DoubleMaxMinOver method [3], which we briefly revisit in this paper as the combination of DoubleMinOver and MaxMinOver, provably converges to the Support Vector solution in the dual representation for classification tasks. Moreover, the angle γ_t between the correct solution w* and the interim solution w_t after t steps is bounded by O(1/t). Furthermore, we introduced a soft margin approach based on the C2-SVM error model. In a straightforward way this model can be adapted for regression [3]. We consequently show that we similarly obtain our regression algorithm as an extension of DoubleMaxMinOver for classification. We give a proof of its convergence properties and consider benchmark results.

2 The Support Vector Machine

The SVM represents a linear classifier or function approximator, respectively. For given sets of input vectors X = {x_1, ..., x_l} and outputs Y = {y_1, ..., y_l}, the SV solution for the inference of f(x) = y is always the best one in the sense that in classification the margin between the separating hyperplane and the two classes is largest, and in regression the obtained curve is the flattest one. In both cases only a few input vectors, the so-called Support Vectors, are needed to describe the solution. This property is very important: the lower the number of Support Vectors, the better the expected generalisation capability [4]. The goal is to solve the optimisation problem

    (1/2) wᵀw = min   subject to   ∀i ∈ {1, ..., l}: y_i (wᵀx_i − b) ≥ 1

in classification, which is apparently equivalent to the maximisation of the margin 1/‖w‖, and

    (1/2) wᵀw = min   subject to   ∀i ∈ {1, ..., l}: |wᵀx_i − b − y_i| ≤ ε

in regression tasks. Note that classification and regression are geometrically analogous in the following sense. While in classification the Support Vectors are the ones lying on the margin, that is, having the smallest distance to the separating hyperplane, in regression the Support Vectors lie on the edges of the ε-tube around the function values. In classification all other vectors are located more or less far away from the margin; in regression they lie within the ε-tube. Hence SV classification can be seen as the perspective from inside the hyperplane area towards the outside, while SV regression is the other way around.
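To make the geometric picture concrete, here is a minimal NumPy sketch (not from the paper; the toy data, the assumed fitted w and b, and the tolerance tol are our own illustrative choices) that separates the candidate Support Vectors, which lie on the edge of the ε-tube, from the non-Support Vectors strictly inside it:

```python
# A minimal sketch, assuming a fitted linear model f(x) = w^T x - b:
# points with |f(x_i) - y_i| = eps lie on the tube edge (candidate Support
# Vectors); points with |f(x_i) - y_i| < eps lie strictly inside the tube.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                 # l = 20 input vectors
w = np.array([1.0, -0.5, 0.25])              # assumed fitted weight vector
b, eps = 0.1, 0.2
y = X @ w - b + rng.uniform(-eps, eps, 20)   # targets inside the tube
y[:3] = (X @ w - b)[:3] + eps                # place three points on the edge

residual = np.abs(X @ w - b - y)             # distance to the fitted curve
tol = 1e-6
on_edge = np.isclose(residual, eps, atol=tol)
print("candidate Support Vectors:", np.flatnonzero(on_edge))
print("points inside the tube  :", np.flatnonzero(residual < eps - tol))
```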

An important advantage of SVMs is furthermore that the classifier or function approximator f(x) = wᵀx − b can be formulated in terms of the observations, i.e. as

    f(x) = Σ_{i=1}^{l} α_i x_iᵀx − b.

Therefore it is possible to replace the inner product xᵀz by a positive definite kernel K(x, z) = ⟨Φ(x), Φ(z)⟩, which leads to the implicit use of arbitrary feature spaces, whose definition does not necessarily have to be known, making profoundly non-linear problems linear.

3 The C2-SoftDoubleMaxMinOver Algorithm for Classification

The DoubleMinOver algorithm [1], as the first extension of MinOver, is a simple maximal margin learning machine. Starting from an interim solution w_t, the method selects from each of the two classes the vector having the smallest margin to the hyperplane defined by V = {v | wᵀv − b = 0}, that is, i_min^a = argmin_{i: y_i = a} a f(x_i) for each class a = ±1. Afterwards, the Lagrange coefficients α_i are increased by 1 to change the direction of the weight vector, given by ∀x: f(x) = wᵀx = Σ_{i=1}^{l} y_i α_i K(x_i, x), towards these two input vectors and to increase their margins. By induction it can be seen that w_t indeed tends to w* as t → ∞. It has been shown [15] that the angle γ_t between the correct solution w* and the interim solution w_t after t steps is bounded by O(1/t). Furthermore, the time complexity per iteration is O(l), where l is the number of input vectors. In order to achieve the dual SV solution without decreasing the convergence speed and while maintaining the optimisation constraints, in DoubleMaxMinOver the coefficients of the vectors with the largest margin are decreased, if it is known that they cannot be Support Vectors, while those of the vectors with the smallest margin are increased twice. In addition, we introduced a soft margin approach working with an extended kernel K'(x_i, x_j) = K(x_i, x_j) + δ_{ij}/C. It can be seen that the data indeed remains linearly separable by construction [3] within this extended feature space. Later we will see that all these statements can straightforwardly be adopted for our regression approach. The combination of this soft margin, the DoubleMinOver, and the MaxMinOver approaches finally leads to the C2-SoftDoubleMaxMinOver method [1,2,3], which is as fast as the LibSVM toolbox on standard benchmarks and achieves comparable results.

4 The MaxMinOver Regression Approach

More general than the classification task is that of regression. Now the goal is to approximate a function g(x) using

    f(x) = Σ_{i=1}^{l} α_i K(x_i, x) − b.

This can be interpreted as linear regression within an appropriate feature space, again defined by the kernel K.
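The soft margin enters only through the kernel. As an illustration (a sketch under assumptions: the RBF base kernel and its width gamma are our own example choices, not prescribed by the paper), the extended kernel K'(x_i, x_j) = K(x_i, x_j) + δ_{ij}/C is simply the Gram matrix with 1/C added to its diagonal:

```python
# Sketch of the soft-margin kernel extension
# K'(x_i, x_j) = K(x_i, x_j) + delta_ij / C,
# here with an RBF base kernel as an illustrative choice.
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] - 2.0 * X @ X.T + sq[None, :]))

def extended_gram(X, C=10.0, gamma=1.0):
    """Extended Gram matrix K' = K + I/C realising the C2 soft margin."""
    return rbf_gram(X, gamma) + np.eye(len(X)) / C

X = np.random.default_rng(1).normal(size=(5, 2))
print(np.round(extended_gram(X), 3))
```

Adding 1/C to the diagonal makes the Gram matrix strictly positive definite; this is the construction by which the data remains linearly separable within the extended feature space.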

To get scope for the minimisation of w, the user defines an ε-tube around the true function values y_i = g(x_i). As in our soft margin classification approach, we consequently use the C2-SVM model for regression. For C → ∞ the Support Vector solution is, in a way, the simplest or flattest one while making no more error than ε. The Support Vectors are the vectors with Lagrange coefficients α_i ≠ 0 lying on the edges of the ε-tube. The introduction of a finite C is not only necessary as a regularisation parameter, but also if it cannot be guaranteed that a regression with an error of at most ε is achievable at all. Fortunately, the geometrical interpretation is similar to the one in C2-SVM classification, so it is not difficult to see that also in the regression case an unequivocal Support Vector solution exists, which can be interpreted as the solution with at most ε error within the extended feature space defined by K'(x, z) = K(x, z) + δ_{x,z}/C. This is as reasonable as the statement that the preconditioned linear equation system (K + E_l/C) α = y, which is used in kernelised Ridge Regression [3], has a solution for all but finitely many C < ∞, even if the bias b = 0 and ε = 0.

As a first approach, consider the direct adaptation of the DoubleMinOver algorithm. As in the classification problem, where we choose from each of the two classes the input vector with the smallest margin, we now choose the two input vectors making the largest positive and the largest negative regression error, respectively. Apparently, using an appropriate step width, the algorithm converges to any feasible solution, that is, all input vectors come to lie within the ε-tube, and the above constraint holds true by construction in each iteration. Still, if the Lagrange coefficients of some input vectors have been increased although these vectors are not potential Support Vectors, we need an instrumentation to revert this decision if they are indeed non-Support Vectors. Once again as in the classification task, where we choose from each of the two classes the input vector with the largest margin, we now choose the two vectors lying furthest away from their edge of the ε-tube and dememorise them by scaling their Lagrange coefficients back, without losing convergence speed (see Algorithm 1). Consequently, this is the MaxMinOver Regression algorithm. The difficult-looking dememorisation criterion will be explained in the next section.

Algorithm 1. The MaxMinOver Regression Algorithm

Require: a set of normed input vectors X = {(x_i, y_i), i ∈ {1, ..., l}} with normed function values, and the parameters C and ε
Ensure: computes the minimal hyperplane regressing the input data, given in dual representation

  set ∀i: α_i ← 0, t ← 0, f(x_i) ≡ Σ_{j=1}^{l} α_j K(x_j, x_i) + α_i/C
  set R² ← max_{i,j} (K(x_i, x_i) − 2K(x_i, x_j) + K(x_j, x_j))
  while the desired precision is not reached do
    set t ← t + 1
    find i_min = argmin_i (f(x_i) − y_i)
    find i_max = argmax_i (f(x_i) − y_i)
    find i_minNSV = argmax_{i: α_i > 0} (f(x_i) − y_i)
    find i_maxNSV = argmin_{i: α_i < 0} (f(x_i) − y_i)
    if (f(x_{i_max}) − y_{i_max}) − (f(x_{i_min}) − y_{i_min}) > 2ε then
      if (f(x_{i_minNSV}) − y_{i_minNSV}) − (f(x_{i_min}) − y_{i_min}) > 4(R² + ε² + 4ε)/t then
        set α_{i_min} ← α_{i_min} + 1/t + min(1/t, α_{i_minNSV})
        set α_{i_minNSV} ← α_{i_minNSV} − min(1/t, α_{i_minNSV})
      else
        set α_{i_min} ← α_{i_min} + 1/t
      end if
      if (f(x_{i_max}) − y_{i_max}) − (f(x_{i_maxNSV}) − y_{i_maxNSV}) > 4(R² + ε² + 4ε)/t then
        set α_{i_max} ← α_{i_max} − 1/t − min(1/t, −α_{i_maxNSV})
        set α_{i_maxNSV} ← α_{i_maxNSV} + min(1/t, −α_{i_maxNSV})
      else
        set α_{i_max} ← α_{i_max} − 1/t
      end if
    else
      if (f(x_{i_minNSV}) − y_{i_minNSV}) − (f(x_{i_min}) − y_{i_min}) > 4(R² + ε² + 4ε)/t then
        set α_{i_minNSV} ← α_{i_minNSV} − min(1/t, α_{i_minNSV})
      end if
      if (f(x_{i_max}) − y_{i_max}) − (f(x_{i_maxNSV}) − y_{i_maxNSV}) > 4(R² + ε² + 4ε)/t then
        set α_{i_maxNSV} ← α_{i_maxNSV} + min(1/t, −α_{i_maxNSV})
      end if
    end if
  end while
  set b ← (1/2)((f(x_{i_min}) − y_{i_min}) + (f(x_{i_max}) − y_{i_max}))
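A compact Python rendering of Algorithm 1 may clarify the update logic. This is a minimal sketch of the loop as reconstructed above, assuming a precomputed kernel matrix; the function name and the fixed iteration budget used as a stopping criterion are our own choices, not part of the paper:

```python
# Minimal sketch of Algorithm 1 (MaxMinOver Regression), assuming a
# precomputed (l, l) kernel matrix K and targets y. The stopping criterion
# (a fixed number of iterations) is our own simplification.
import numpy as np

def maxminover_regression(K, y, C=10.0, eps=0.1, n_steps=2000):
    l = len(y)
    alpha = np.zeros(l)
    Kd = np.diag(K)
    # Squared radius estimate R^2 = max_ij (K_ii - 2 K_ij + K_jj).
    R2 = np.max(Kd[:, None] - 2.0 * K + Kd[None, :])

    for t in range(1, n_steps + 1):
        f = K @ alpha + alpha / C          # soft margin via the extended kernel
        err = f - y
        i_min, i_max = int(np.argmin(err)), int(np.argmax(err))
        thresh = 4.0 * (R2 + eps**2 + 4.0 * eps) / t
        step = 1.0 / t

        pos = np.flatnonzero(alpha > 0)    # dememorisation candidates
        neg = np.flatnonzero(alpha < 0)
        i_minNSV = pos[np.argmax(err[pos])] if pos.size else None
        i_maxNSV = neg[np.argmin(err[neg])] if neg.size else None

        if err[i_max] - err[i_min] > 2.0 * eps:
            # A point lies below the tube: pull f up at i_min.
            if i_minNSV is not None and err[i_minNSV] - err[i_min] > thresh:
                d = min(step, alpha[i_minNSV])
                alpha[i_min] += step + d   # "increase twice"
                alpha[i_minNSV] -= d       # dememorise the non-Support Vector
            else:
                alpha[i_min] += step
            # A point lies above the tube: pull f down at i_max.
            if i_maxNSV is not None and err[i_max] - err[i_maxNSV] > thresh:
                d = min(step, -alpha[i_maxNSV])
                alpha[i_max] -= step + d
                alpha[i_maxNSV] += d
            else:
                alpha[i_max] -= step
        else:
            # All points inside the tube: only dememorise spurious coefficients.
            if i_minNSV is not None and err[i_minNSV] - err[i_min] > thresh:
                alpha[i_minNSV] -= min(step, alpha[i_minNSV])
            if i_maxNSV is not None and err[i_max] - err[i_maxNSV] > thresh:
                alpha[i_maxNSV] += min(step, -alpha[i_maxNSV])

    err = K @ alpha + alpha / C - y
    b = 0.5 * (err.min() + err.max())
    return alpha, b
```

Predictions on a new point x are then obtained as f(x) = Σ_j α_j K(x_j, x) − b. Note that this naive sketch recomputes f in O(l²) per iteration; the O(l) per-iteration complexity stated for the MinOver family requires updating f incrementally after each coefficient change.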

4.1 On the Convergence of MaxMinOver Regression

Fig. 1. Construction of the classification problem. The black dots are the true samples which we want to approximate. By shifting them orthogonally to the SVM fit curve in both directions and assigning different classes (bright x̂_i^+ and dark gray x̂_i^− dots), we construct a classification problem whose maximal margin separating hyperplane is exactly the regression fit.

In the following we assume, without loss of generality, C → ∞ and suppose that a solution with no more error than ε exists. Otherwise we choose an appropriate C > 0 and reset K'(x, z) = K(x, z) + δ_{x,z}/C.

First of all, the Support Vector solution, and only the SV solution, is a fixed point of the algorithm. This can be seen as follows. Once we have reached the SV solution, the outer condition (f(x_{i_max}) − y_{i_max}) − (f(x_{i_min}) − y_{i_min}) > 2ε of the algorithm evaluates to false, since all points have to be within the ε-tube. The left hand side of the inner conditions is always 0, because only Support Vectors have non-zero Lagrange coefficients and all Support Vectors lie exactly on the edge of the ε-tube, while the right hand side is always positive. Hence nothing changes any more and the algorithm has finished. On the other hand, it can further be seen that no other configuration of α can be a fixed point. If any pair of vectors lies outside of the ε-tube, then the outer condition is fulfilled. Otherwise there has to be at least one vector within the ε-tube, and not on its edge, having a non-zero Lagrange coefficient and therefore leading to a positive fulfilment of the inner condition after a finite number of iterations.

Now we show that the MaxMinOver Regression algorithm indeed approaches the Support Vector solution, by reducing the regression problem to a classification problem. We construct the classification problem within the extended (dim Z + 1)-dimensional space, where Z is the feature space and the additional dimension represents the function values y, and demonstrate that the classification and the regression algorithm work uniformly in the limit. Let X = {x_1, ..., x_l} be the set of input vectors with function values Y = {y_1, ..., y_l}, and w* the Support Vector solution of the regression problem. Of course, if one uses a non-linear feature space, then x_i has to be replaced by Φ(x_i) or Φ'(x_i), respectively. We construct the set X̂ = {x̂_1^+, x̂_1^−, ..., x̂_l^+, x̂_l^−} from X by substituting

    x̂_i^± = (x_i, y_i) ± (1+ε) (w*, −1) / (‖w*‖² + 1)    (1)

and assigning the classes y(x̂_i^a) = a, a = ±1. The Support Vector solution of this classification problem is ŵ = (w*, −1), apart from its norm, with functional margin 1, because

    min_{i, a=±1} a (ŵᵀx̂_i^a − b) = min_{i, a=±1} [ a (w*ᵀx_i − y_i − b) + (1+ε) ] = −ε + (1+ε) = 1    (2)

and for each vector v̂ ≠ ŵ with ‖v̂‖ = ‖ŵ‖ it holds both that

    (1+ε) v̂ᵀŵ / (‖w*‖² + 1) < (1+ε) ŵᵀŵ / (‖w*‖² + 1) = 1 + ε

(compare eq. (1), part 2) and that, at least for the Support Vectors, either vᵀx_i − y_i − b ≥ ε or y_i − vᵀx_i + b ≥ ε for any bias b (compare eq. (1), part 1), because w* is the Support Vector solution of the actual regression problem.
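The construction can be verified numerically. The following sketch (a toy example under our own assumptions for w*, b, ε and the data; not code from the paper) lifts each pair (x_i, y_i), applies the shift of eq. (1), and checks that all lifted points attain functional margin at least 1 with respect to ŵ = (w*, −1):

```python
# Sketch of the lifting used in the convergence argument: regression points
# (x_i, y_i) become a two-class problem whose maximal margin solution is
# (w*, -1). w_star, b, eps and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
l, d = 10, 3
w_star = np.array([0.8, -0.3, 0.5])
b, eps = 0.0, 0.2
X = rng.normal(size=(l, d))
y = X @ w_star - b + rng.uniform(-eps, eps, size=l)  # all inside the tube

w_hat = np.append(w_star, -1.0)                # normal of the lifted plane
shift = (1.0 + eps) * w_hat / (w_star @ w_star + 1.0)

P = np.hstack([X, y[:, None]])                 # lifted points (x_i, y_i)
X_plus, X_minus = P + shift, P - shift         # classes +1 and -1

margin_plus = X_plus @ w_hat - b               # should be >= 1
margin_minus = -(X_minus @ w_hat - b)          # should be >= 1
print(margin_plus.min(), margin_minus.min())   # both at least 1
```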

Now suppose an interim solution w, and let ŵ = (w, −1) denote its lifted counterpart. Then MaxMinOver for classification and for regression choose the same vectors in the next iteration. For the first two find-statements of Algorithm 1 it holds that

    argmin_i (wᵀx_i − y_i) = argmin_i (wᵀx_i − y_i + S) = argmin_i ŵᵀx̂_i^+    (3)

    argmax_i (wᵀx_i − y_i) = argmax_i (wᵀx_i − y_i − S) = argmax_i ŵᵀx̂_i^− = argmin_i (−ŵᵀx̂_i^−)    (4)

with S = (1+ε)(wᵀw* + 1)/(‖w*‖² + 1), and further, for the second two find-statements concerning the dememorisation of non-Support Vectors,

    argmax_{i: α_i > 0} (wᵀx_i − y_i) = argmax_{i: α_i > 0} (wᵀx_i − y_i + S) = argmax_{i: α_i > 0} ŵᵀx̂_i^+    (5)

    argmin_{i: α_i < 0} (wᵀx_i − y_i) = argmin_{i: α_i < 0} (wᵀx_i − y_i − S) = argmin_{i: α_i < 0} ŵᵀx̂_i^− = argmax_{i: α_i < 0} (−ŵᵀx̂_i^−).    (6)

Note that the constraints α_i ≥ 0 and α_i ≤ 0, respectively, implicitly hold for eq. (3) and eq. (4), while for eqs. (5) and (6) it is necessary to choose only potential Support Vectors of the correct classes for dememorisation. The presented interrelationship implies that if the classification algorithm increments or decrements some y_i α_i, then the regression algorithm does so as well. But while ŵ_classification increases linearly in time and converges to ŵ apart from its norm, ŵ_regression must not grow arbitrarily: there exists a time t_start < t from which on C_1, C_2 ∈ ℝ with C_1 < C_2 exist such that C_1 t < (ŵ_classification)_{l+1} < C_2 t, whereas for ŵ_regression the component (ŵ_regression)_{l+1} = −1 is implicitly given. Hence ŵ_regression has to be constrained, and we therefore choose an appropriate step width of order O(1/t) instead of normalising after each iteration. Although the harmonic series diverges, and hence any possible solution vector within the feature space remains reachable, we want to emphasise that the convergence speed can be tuned significantly by normalising Y or by choosing an appropriate constant factor for the step width.
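Equations (3) and (4) (and analogously (5) and (6)) are easy to check numerically, since the lifting merely shifts every regression score by the same constant S. A minimal self-contained sketch (the toy data and the interim w are our own assumptions):

```python
# Numerical check of eqs. (3)-(4): with an interim w, the regression scores
# w^T x_i - y_i and the lifted classification scores differ only by the
# constant S = (1+eps)(w^T w* + 1)/(||w*||^2 + 1), so the same indices win.
import numpy as np

rng = np.random.default_rng(3)
l, d, eps = 10, 3, 0.2
w_star = rng.normal(size=d)
X = rng.normal(size=(l, d))
y = X @ w_star + rng.uniform(-eps, eps, size=l)

w_hat_star = np.append(w_star, -1.0)
shift = (1.0 + eps) * w_hat_star / (w_star @ w_star + 1.0)
P = np.hstack([X, y[:, None]])
X_plus, X_minus = P + shift, P - shift        # lifted classes +1 / -1

w = rng.normal(size=d)                        # arbitrary interim solution
w_hat = np.append(w, -1.0)
scores = X @ w - y
assert np.argmin(scores) == np.argmin(X_plus @ w_hat)    # eq. (3)
assert np.argmax(scores) == np.argmax(X_minus @ w_hat)   # eq. (4)
print("regression and classification select the same indices")
```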

We derive the criterion for the dememorisation of non-Support Vectors from the classification method, where R²_classification = max_{i,j: y_i = 1, y_j = −1} ‖x_i − x_j‖² has already been proven [1,2]. As this criterion only needs to be an upper bound, we estimate

    R²_regression = max_{i,j} ‖ x̂_i^+ − x̂_j^− ‖² = max_{i,j} ‖ (x_i, y_i) − (x_j, y_j) + 2(1+ε)(w*, −1)/(‖w*‖² + 1) ‖² ≤ R²_classification + (ε² + 4ε)/(‖w*‖² + 1) ≤ R²_classification + ε² + 4ε.

Note that we do not need to know the solution w* itself; hence it is possible to apply this estimation in practice.

5 Experimental Results on Artificial Data

To evaluate the MaxMinOver Regression algorithm we constructed randomly generated artificial datasets and varied the variance, the regression error, the parameters C and ε, the number of iteration steps, the distribution of the data points, the number of training examples, and the dimension of the feature space. Table 1 shows the averaged results of these evaluations. It can be seen that the regression error is comparable to the one achieved by the LibSVM toolbox. The higher the number of iterations, the better the performance of our method and the closer it comes to the results of LibSVM, whose step width was not changed. Only the number of Support Vectors sometimes differs; more iteration steps in both methods should lead the numbers of Support Vectors to converge towards each other.

Table 1. Averaged regression results obtained with MaxMinOver Regression (B) on artificial data, compared with averaged results obtained with the ε-SVR of the LibSVM toolbox (A). The simple MaxMinOver Regression algorithm achieves comparable results with a few training steps. Caption: ‖w‖ norm of the weight vector, L2 squared error, L2,ε squared error tolerating error ε, SV number of Support Vectors. Columns: steps, distribution (uniform or normal), number of examples, ‖w_A‖, ‖w_B‖, L2,A, L2,B, L2,ε,A, L2,ε,B, SV_A, SV_B, (‖w_A‖ − ‖w_B‖)/‖w_A‖.

6 Conclusions

Regression is an important mathematical problem which occurs in a wide variety of practical applications. Support Vector regression achieves results with a high generalisation capability. Several rather complicated Support Vector regression approaches have been introduced in the past. The main goal of this paper is hence to show that Support Vector regression, too, can be dealt with in a simple way, with the MaxMinOver Regression approach. In benchmarks the method achieves a performance as good as that of the LibSVM toolbox. Both its simplicity and its good performance make this approach interesting for implementations in industrial environments.

References

1. Martinetz, T., Labusch, K., Schneegass, D.: SoftDoubleMinOver: A simple procedure for maximum margin classification. In: Proc. of the International Conference on Artificial Neural Networks (2005)
2. Martinetz, T.: MaxMinOver: A simple incremental learning procedure for support vector classification. In: Proc. of the International Joint Conference on Neural Networks, IEEE Press (2004)
3. Schneegass, D., Martinetz, T., Clausohm, M.: OnlineDoubleMaxMinOver: A simple approximate time and information efficient online support vector classification method. In: Proc. of the European Symposium on Artificial Neural Networks (2006, in preparation)
4. Cortes, C., Vapnik, V.: Support-vector networks. Machine Learning 20 (1995) 273–297
5. Vapnik, V.: The Nature of Statistical Learning Theory. Springer-Verlag, New York (1995)
6. LeCun, Y., Jackel, L., Bottou, L., Brunot, A., Cortes, C., Denker, J., Drucker, H., Guyon, I., Müller, U., Säckinger, E., Simard, P., Vapnik, V.: Comparison of learning algorithms for handwritten digit recognition. In: Proc. Int. Conf. on Artificial Neural Networks (1995)
7. Osuna, E., Freund, R., Girosi, F.: Training support vector machines: an application to face detection. In: Proc. CVPR (1997)
8. Schölkopf, B.: Support Vector Learning (1997)
9. Friess, T., Cristianini, N., Campbell, C.: The kernel adatron algorithm: a fast and simple learning procedure for support vector machines. In: Proc. 15th International Conference on Machine Learning (1998)
10. Platt, J.: Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods - Support Vector Learning. MIT Press (1999)
11. Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K.: A fast iterative nearest point algorithm for support vector machine classifier design. IEEE Transactions on Neural Networks (2000)
12. Li, Y., Long, P.: The relaxed online maximum margin algorithm. Machine Learning 46 (2002) 361–387
13. Cristianini, N., Shawe-Taylor, J.: Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge (2000)
14. Vapnik, V.N.: Statistical Learning Theory. John Wiley & Sons, New York (1998)
15. Martinetz, T.: MinOver revisited for incremental support-vector-classification. In: Lecture Notes in Computer Science, Springer (2004)
