Support Vector Machines for Classification and Regression


ISIS Technical Report

Support Vector Machines for Classification and Regression

Steve Gunn

10 November 1997

Contents

1 Introduction 3
2 Support Vector Classification 4
  2.1 The Optimal Separating Hyperplane
    2.1.1 Linearly Separable Example
  2.2 The Generalised Optimal Separating Hyperplane
    2.2.1 Linearly Non-Separable Example
  2.3 Generalisation in High Dimensional Feature Space
    2.3.1 Polynomial Mapping Example
  2.4 Discussion
3 Feature Space 17
  3.1 Kernel Functions
    3.1.1 Polynomial
    3.1.2 Gaussian Radial Basis Function
    3.1.3 Exponential Radial Basis Function
    3.1.4 Multi-Layer Perceptron
    3.1.5 Fourier Series
    3.1.6 Linear Splines
    3.1.7 B_n Splines
    3.1.8 Tensor Product Splines
  3.2 Implicit vs. Explicit Bias
  3.3 Data Normalisation
4 Classification Example: IRIS Data 20
5 Support Vector Regression 25
  5.1 Linear Regression
    5.1.1 Example
  5.2 Non Linear Regression
    5.2.1 Case 1: The Separable Case
    5.2.2 Polynomial Learning Machine Example
6 Regression Example: Titanium Data 31
7 Conclusions 35
References 36
Appendix - Implementation Issues 38
  Support Vector Classification 38
  Support Vector Regression 41

1 Introduction

The foundations of Support Vector Machines (SVM) have been developed by Vapnik [19] and are gaining popularity due to many attractive features and promising empirical performance. The formulation embodies the Structural Risk Minimisation (SRM) principle, which in our work [8] has been shown to be superior to the traditional Empirical Risk Minimisation (ERM) principle employed by conventional neural networks. SRM minimises an upper bound on the generalisation error, as opposed to ERM, which minimises the error on the training data. It is this difference which equips SVMs with a greater ability to generalise, which is our goal in statistical learning. SVMs were developed to solve the classification problem, but recently they have been extended to the domain of regression problems [18].

In the literature the terminology for SVMs is slightly confusing. The term SVM is typically used to describe classification with support vector methods, and support vector regression is used to describe regression with support vector methods. In this report the term SVM will refer to both classification and regression methods, and the terms Support Vector Classification (SVC) and Support Vector Regression (SVR) will be used for specification.

The report starts with an introduction to the structural risk minimisation principle. The SVM is introduced in the setting of classification, this being both historical and more accessible. This leads on to mapping the input into a higher dimensional feature space by a suitable choice of kernel function. The report then considers the problem of regression. To illustrate the properties of the techniques, two examples are given.

The VC dimension is a scalar value that measures the capacity of a set of functions. The set of linear indicator functions has a VC dimension equal to n+1. The figure illustrates how three points in the plane can be shattered by the set of linear indicator functions, whereas four points cannot. In this case the VC dimension is equal to the number of free parameters, but in general that is not the case; e.g. the function A sin(bx) has an infinite VC dimension.
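As a rough illustration of the shattering argument above, the following MATLAB sketch checks that three non-collinear points in the plane can be given any of the 2^3 labellings by a linear indicator function. It assumes a recent MATLAB (implicit expansion) and the Optimization Toolbox for quadprog; the points chosen are arbitrary, not taken from the report.

% Sketch: verify that 3 non-collinear points can be shattered by linear
% indicator functions in the plane, consistent with a VC dimension of n+1 = 3.
X = [0 0; 1 0; 0 1];                 % three non-collinear points
H = diag([1 1 0]); f = zeros(3,1);   % minimise |w|^2 over z = [w1; w2; b]
opts = optimoptions('quadprog','Display','off');
for labelling = 0:7
    y = 2*double(dec2bin(labelling,3) == '1')' - 1;  % labels in {-1,+1}
    A = -[y.*X, y]; b = -ones(3,1);  % y_i((w.x_i)+b) >= 1  as  A z <= b
    [~,~,flag] = quadprog(H, f, A, b, [],[],[],[],[], opts);
    fprintf('labelling %s separable: %d\n', dec2bin(labelling,3), flag == 1);
end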

Figure: the SRM principle creates a structure of nested subsets of functions.

2 Support Vector Classification

The classification problem can be restricted to consideration of the two-class problem without loss of generality. In this problem the goal is to separate the two classes by a function which is induced from available examples. The goal is to produce a classifier which will work well on unseen examples, i.e. it generalises well. Consider the example in Figure 1. Here there are many possible linear classifiers that can separate the data, but there is only one which maximises the margin (maximises the distance between it and the nearest data point of each class). This linear classifier is termed the optimal separating hyperplane. Intuitively, we would expect this boundary to generalise well, as opposed to the other possible boundaries.

Figure 1: Optimal Separating Hyperplane

2.1 The Optimal Separating Hyperplane

Consider the problem of separating the set of training vectors belonging to two separate classes,

    $(y_1, x_1), \ldots, (y_\ell, x_\ell), \quad x \in \mathbb{R}^n, \; y \in \{-1, +1\},$    (1)

with a hyperplane,

    $(w \cdot x) + b = 0.$    (2)

The set of vectors is said to be optimally separated by the hyperplane if it is separated without error and the distance between the closest vector and the hyperplane is maximal. There is some redundancy in Equation (2), and without loss of generality it is appropriate to consider a canonical hyperplane [19], where the parameters $w$, $b$ are constrained by,

    $\min_i |(w \cdot x_i) + b| = 1.$    (3)

This incisive constraint on the parameterisation is preferable to alternatives in simplifying the formulation of the problem. In words it states that the norm of the weight vector should be equal to the inverse of the distance of the nearest point in the data set to the hyperplane. The idea is illustrated in Figure 2.

Figure 2: Canonical Hyperplanes

A separating hyperplane in canonical form must satisfy the following constraints,

    $y_i [(w \cdot x_i) + b] \geq 1, \quad i = 1, \ldots, \ell.$    (4)

The distance $d(w, b; x)$ of a point $x$ from the hyperplane $(w, b)$ is,

    $d(w, b; x) = \frac{|(w \cdot x) + b|}{\|w\|}.$    (5)

The optimal hyperplane is given by maximising the margin, $\rho(w, b)$, subject to the constraints of Equation (4). The margin is given by,

    $\rho(w, b) = \min_{\{x_i : y_i = 1\}} d(w, b; x_i) + \min_{\{x_j : y_j = -1\}} d(w, b; x_j)
               = \frac{1}{\|w\|} \left( \min_{\{x_i : y_i = 1\}} |(w \cdot x_i) + b| + \min_{\{x_j : y_j = -1\}} |(w \cdot x_j) + b| \right)
               = \frac{2}{\|w\|}.$    (6)

Hence the hyperplane that optimally separates the data is the one that minimises

    $\Phi(w) = \tfrac{1}{2} \|w\|^2.$    (7)

It is independent of $b$ because, provided Equation (4) is satisfied (i.e. it is a separating hyperplane), changing $b$ will move it in the direction normal to itself. Accordingly the margin remains unchanged, but the hyperplane is no longer optimal in that it will be nearer to one class than the other.

To consider how minimising Equation (7) is equivalent to implementing the SRM principle, suppose that the following bound holds,

    $\|w\| \leq A.$    (8)

Then from Equations (4) and (5),

    $d(w, b; x_i) \geq \frac{1}{A}.$    (9)

Accordingly the hyperplanes cannot be nearer than $1/A$ to any of the data points, and intuitively it can be seen in Figure 3 how this reduces the possible hyperplanes, and hence the capacity.

Figure 3: Constraining the Canonical Hyperplanes

The VC dimension, $h$, of the set of canonical hyperplanes in $n$ dimensional space is,

    $h \leq \min[R^2 A^2, n] + 1,$    (10)

where $R$ is the radius of a hypersphere enclosing all the data points. Hence minimising Equation (7) is equivalent to minimising an upper bound on the VC dimension. The solution to the optimisation problem of Equation (7) under the constraints of Equation (4) is given by the saddle point of the Lagrange functional (Lagrangian) [10],

    $L(w, b, \alpha) = \tfrac{1}{2}\|w\|^2 - \sum_{i=1}^{\ell} \alpha_i \left\{ y_i [(w \cdot x_i) + b] - 1 \right\},$    (11)

where $\alpha_i$ are the Lagrange multipliers. The Lagrangian has to be minimised with respect to $w$, $b$ and maximised with respect to $\alpha_i \geq 0$. Classical Lagrangian duality enables the primal problem, Equation (11), to be transformed to its dual problem, which is easier to solve. The dual problem is given by,

    $\max_\alpha W(\alpha) = \max_\alpha \left( \min_{w, b} L(w, b, \alpha) \right).$    (12)

The minimum with respect to $w$ and $b$ of the Lagrangian, $L$, is given by,

    $\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{\ell} \alpha_i y_i = 0, \qquad
     \frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{\ell} \alpha_i y_i x_i.$    (13)

Hence from Equations (11), (12) and (13), the dual problem is,

    $\max_\alpha W(\alpha) = \max_\alpha \left( -\tfrac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{k=1}^{\ell} \alpha_k \right),$    (14)

with constraints,

    $\alpha_i \geq 0, \quad i = 1, \ldots, \ell, \qquad \sum_{j=1}^{\ell} \alpha_j y_j = 0.$    (15)

Solving Equation (14) with the constraints of Equation (15) determines the Lagrange multipliers, $\alpha_i$, and the optimal separating hyperplane is given by,

    $w^* = \sum_{i=1}^{\ell} \alpha_i y_i x_i, \qquad b^* = -\tfrac{1}{2} \left( w^* \cdot [x_r + x_s] \right),$    (16)

where $x_r$ and $x_s$ are any support vectors from each class satisfying,

    $\alpha_r, \alpha_s > 0, \quad y_r = 1, \; y_s = -1.$    (17)

The classifier is then,

    $f(x) = \mathrm{sgn}\left( (w^* \cdot x) + b^* \right).$    (18)

From the Kuhn-Tucker conditions,

    $\alpha_i \left( y_i [(w \cdot x_i) + b] - 1 \right) = 0, \quad i = 1, \ldots, \ell,$    (19)

and hence only for the points $x_i$ which satisfy,

    $y_i [(w \cdot x_i) + b] = 1,$    (20)

will the Lagrange multipliers be non-zero. These points are termed Support Vectors (SV). If the data is linearly separable all the support vectors will lie on the margin, and hence the number of SV is typically very small. Consequently the hyperplane is determined by a small subset of the training set; the other points could be removed from the training set and recalculating the hyperplane would produce the same answer. Hence SVM can be used to summarise the information contained in a data set by the SV produced.
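To make the dual concrete, here is a minimal MATLAB sketch that solves Equations (14)-(16) with the Optimization Toolbox's quadprog. The toy data and the 1e-6 support vector threshold are illustrative choices, not values from the report.

% Sketch: hard-margin SVC via the dual (14)-(15), assuming quadprog.
X = [1 1; 2 2; 2 0; 0 0; 1 0; 0 1];  % illustrative training inputs
Y = [1; 1; 1; -1; -1; -1];           % class labels
n = size(X,1);
H = (Y*Y') .* (X*X');                % H_ij = y_i y_j (x_i . x_j)
f = -ones(n,1);                      % maximising sum(alpha) = minimising -sum
opts = optimoptions('quadprog','Display','off');
alpha = quadprog(H, f, [], [], Y', 0, zeros(n,1), [], [], opts);
w = X' * (alpha .* Y);               % Equation (16): w = sum alpha_i y_i x_i
sv = find(alpha > 1e-6);             % support vectors have non-zero alpha
b = mean(Y(sv) - X(sv,:)*w);         % bias from the KKT conditions (20)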

Points of interest:

    $\|w^*\|^2 = \sum_{i \in \mathrm{SVs}} \alpha_i = \sum_{i \in \mathrm{SVs}} \sum_{j \in \mathrm{SVs}} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j).$

Hence from Equation (10) the VC dimension of the classifier is bounded by,

    $h \leq \min\left[ R^2 \sum_{i \in \mathrm{SVs}} \alpha_i, \, n \right] + 1,$    (21)

and if the training data, $x$, is normalised to lie in the interval $[-1, 1]^n$,

    $h \leq \min\left[ n \sum_{i \in \mathrm{SVs}} \alpha_i, \, n \right] + 1.$    (22)

2.1.1 Linearly Separable Example

Given the training set in Table 1, the SVC solution is shown in Figure 4. The dotted lines describe the locus of the margin, and the circled data points represent the SV, which all lie on this margin.

Table 1: Linearly Separable Classification Data ($x_1$, $x_2$, class)

Figure 4: Optimal Separating Hyperplane

2.2 The Generalised Optimal Separating Hyperplane

Figure 5: Generalised Optimal Separating Hyperplane

So far the discussion has been restricted to the case where the training data is linearly separable. However, in general this will not be the case (Figure 5). There are two approaches to generalising the problem, which are dependent upon prior knowledge of the problem and an estimate of the noise on the data. In the case where it is expected (or possibly even known) that a hyperplane can correctly separate the data, a method of introducing an additional cost function associated with misclassification is appropriate. To enable the optimal separating hyperplane method to be generalised, Cortes [5] introduced non-negative variables $\xi_i \geq 0$ and a penalty function,

    $F_\sigma(\xi) = \sum_i \xi_i^\sigma, \quad \sigma > 0,$

where the $\xi_i$ are a measure of the misclassification error. The optimisation problem is now posed so as to minimise the classification error as well as minimising the VC dimension of the classifier. The constraints of Equation (4) are modified for the non-separable case to,

    $y_i [(w \cdot x_i) + b] \geq 1 - \xi_i, \quad i = 1, \ldots, \ell,$    (23)

where $\xi_i \geq 0$. The generalised optimal separating hyperplane is determined by the vector $w$ that minimises the functional,

    $\Phi(w, \xi) = \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i,$    (24)

(where $C$ is a given value) subject to the constraints of Equation (23). The solution to the optimisation problem of Equation (24) under the constraints of Equation (23) is given by the saddle point of the Lagrangian [10],

    $L(w, b, \xi, \alpha, \beta) = \tfrac{1}{2}(w \cdot w) + C \sum_i \xi_i - \sum_{i=1}^{\ell} \alpha_i \left\{ y_i [(w \cdot x_i) + b] - 1 + \xi_i \right\} - \sum_{i=1}^{\ell} \beta_i \xi_i,$    (25)

where $\alpha_i$, $\beta_i$ are the Lagrange multipliers. The Lagrangian has to be minimised with respect to $w$, $b$, $\xi$ and maximised with respect to $\alpha_i, \beta_i \geq 0$. As before, classical Lagrangian duality enables the primal problem, Equation (25), to be transformed to its dual problem. The dual problem is given by,

    $\max_{\alpha, \beta} W(\alpha, \beta) = \max_{\alpha, \beta} \left( \min_{w, b, \xi} L(w, b, \xi, \alpha, \beta) \right).$    (26)

The minimum with respect to $w$, $b$ and $\xi$ of the Lagrangian, $L$, is given by,

    $\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^{\ell} \alpha_i y_i = 0, \qquad
     \frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^{\ell} \alpha_i y_i x_i, \qquad
     \frac{\partial L}{\partial \xi} = 0 \;\Rightarrow\; \alpha_i + \beta_i = C.$    (27)

Hence from Equations (25), (26) and (27), the dual problem is,

    $\max_\alpha W(\alpha) = \max_\alpha \left( -\tfrac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) + \sum_{k=1}^{\ell} \alpha_k \right),$    (28)

with constraints,

    $0 \leq \alpha_i \leq C, \quad i = 1, \ldots, \ell, \qquad \sum_{j=1}^{\ell} \alpha_j y_j = 0.$    (29)

The solution to this minimisation problem is identical to the separable case except for a modification of the bounds of the Lagrange multipliers. The uncertain part of Cortes's approach is that the coefficient $C$ has to be determined. In some circumstances $C$ can be directly related to a regularisation parameter [7, 15]. Blanz [4] uses a value of $C = 5$, but ultimately $C$ must be chosen to reflect the knowledge of the noise on the data. This warrants further work, but a more practical discussion is given in Chapter 4.

2.2.1 Linearly Non-Separable Example

Two additional data points are added to the separable data of Table 1 to produce a linearly non-separable data set, Table 2. The SVC solution is shown in Figure 6. The SV are no longer required to lie on the margin, as in Figure 4.

Table 2: Linearly Non-Separable Classification Data ($x_1$, $x_2$, class)

Figure 6: Generalised Optimal Separating Hyperplane Example (C = 10)
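The only change in the soft-margin dual (28)-(29) relative to the separable case is the upper bound on the multipliers; a one-line sketch, reusing H, f, Y, n and opts from the previous snippet, with an arbitrary C:

% Sketch: soft-margin SVC; only the upper bound on alpha changes.
C = 10;
alpha = quadprog(H, f, [], [], Y', 0, zeros(n,1), C*ones(n,1), [], opts);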

In contrast, as $C \to \infty$ the solution converges towards the solution obtained by the optimal separating hyperplane, Figure 7.

Figure 7: Generalised Optimal Separating Hyperplane Example (C = ∞)

In the limit as $C \to 0$ the solution converges to the one shown in Figure 8.

Figure 8: Generalised Optimal Separating Hyperplane Example (C = 10^-8)

2.3 Generalisation in High Dimensional Feature Space

In the case where a linear boundary is inappropriate, the SVM can map the input vector, $x$, into a high dimensional feature space, $z$. By choosing a non-linear mapping a priori, the SVM constructs an optimal separating hyperplane in this higher dimensional space, Figure 9. The idea exploits the method of [1], which enables the curse of dimensionality [3] to be addressed.

Figure 9: Mapping the Input Space into a High Dimensional Feature Space (input space, feature space, output space)

There are some restrictions on the non-linear mappings that can be employed, see Chapter 3, but it turns out that most commonly employed functions are acceptable. Among the acceptable mappings are polynomials, radial basis functions and certain sigmoid functions. The optimisation problem of Equation (28) becomes,

    $\max_\alpha W(\alpha) = \max_\alpha \left( -\tfrac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} \alpha_i \alpha_j y_i y_j K(x_i, x_j) + \sum_{k=1}^{\ell} \alpha_k \right),$    (30)

where $K(x, y)$ is the kernel function performing the non-linear mapping into feature space, and the constraints are unchanged,

    $\alpha_i \geq 0, \quad i = 1, \ldots, \ell, \qquad \sum_{j=1}^{\ell} \alpha_j y_j = 0.$    (31)

Solving Equation (30) with the constraints of Equation (31) determines the Lagrange multipliers, $\alpha_i$, and the classifier implementing the optimal separating hyperplane in the feature space is given by,

    $f(x) = \mathrm{sgn}\left( \sum_{i \in \mathrm{SVs}} \alpha_i y_i K(x_i, x) + b^* \right),$    (32)

where

    $b^* = -\tfrac{1}{2} \sum_{i \in \mathrm{SVs}} \alpha_i y_i \left[ K(x_i, x_r) + K(x_i, x_s) \right].$    (33)

(Alternatively, a more stable way of computing $b$ is possible [18].) If the kernel contains a bias term, the bias can be accommodated within the kernel function, and hence the classifier is simply,

    $f(x) = \mathrm{sgn}\left( \sum_{i \in \mathrm{SVs}} \alpha_i y_i K(x_i, x) \right).$    (34)

Many employed kernels have a bias term, and any finite kernel can be made to have one [17]. Note that provided the kernel contains a bias term, the term $b$ may be dropped from the equation of the hyperplane, simplifying the optimisation problem by removing the equality constraint of Equation (31).
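A sketch of the kernelised dual (30) and classifier (34), assuming the biased polynomial kernel of the next section so that the bias can be absorbed into the kernel and the equality constraint dropped; kpoly is a hypothetical helper, and X, Y, f, n, C and opts are reused from the earlier snippets:

% Sketch: kernel SVC with an implicitly biased kernel.
kpoly = @(U,V) (1 + U*V').^2;      % K(x,y) = (1 + x.y)^2, Equation (35)
H = (Y*Y') .* kpoly(X, X);         % kernel replaces (x_i . x_j) in (28)
alpha = quadprog(H, f, [], [], [], [], zeros(n,1), C*ones(n,1), [], opts);
sv = find(alpha > 1e-6);           % support vectors
fdec = @(x) sign(kpoly(x, X(sv,:)) * (alpha(sv).*Y(sv)));  % Equation (34)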

Chapter 3 discusses in more detail the choice of kernel functions and the conditions that are imposed.

2.3.1 Polynomial Mapping Example

Consider a kernel of the form,

    $K(x, y) = (1 + x \cdot y)^2.$    (35)

Applying the non-linear SVC to the linearly non-separable training data of Table 2 produces the classification illustrated in Figure 10 (C = ∞). The margin is no longer of constant width, due to the non-linear projection into the input space. The solution is in contrast to Figures 6-8, in that the training data is now classified correctly. However, even though SVMs implement the SRM principle and hence should generalise well, careful choice of the kernel function is necessary to produce a classification boundary that is topologically appropriate. It is always possible to map the input space into a dimension greater than the number of training points and produce a perfect classifier on the training set. However, this will generalise badly. The choice of kernel warrants further investigation.

Figure 10: Mapping input space into Polynomial Feature Space

2.4 Discussion

Typically the data will only be linearly separable in some, possibly very high dimensional, feature space. It may not make sense to try and separate the data exactly, particularly when only a finite amount of training data is available, which is potentially corrupted by some kind of noise. Hence in practice it will be necessary to employ the non-separable approach, which places an upper bound on the Lagrange multipliers. This raises the question of how to determine the parameter $C$. It is similar to the problem in

regularisation where the regularisation coefficient has to be determined. There the parameter can be determined by a process of cross-validation, and it may be possible to implement this here. Note that removing the training patterns that are not support vectors will not change the solution, and hence a fast method may be available when the support vectors are sparse.

3 Feature Space

3.1 Kernel Functions

The following theory is based upon Reproducing Kernel Hilbert Spaces (RKHS) [2, 20, 17, 9]. The idea of the kernel function is to perform the operations in the input space rather than the potentially high dimensional feature space. Hence the inner product does not need to be evaluated in the feature space. An inner product in feature space has an equivalent kernel in input space,

    $K(x, y) = (k(x) \cdot k(y)),$    (36)

provided certain conditions hold. This is appropriate in our case if $K$ is a symmetric positive definite function which satisfies Mercer's conditions,

    $K(x, y) = \sum_{m=1}^{\infty} \alpha_m \psi_m(x) \psi_m(y), \quad \alpha_m \geq 0,$
    $\iint K(x, y)\, g(x)\, g(y)\, dx\, dy > 0, \quad \int g^2(x)\, dx < \infty.$    (37)

Popular functions which satisfy Mercer's conditions are listed below.

3.1.1 Polynomial

    $K(x, y) = (x \cdot y)^d \quad \text{or} \quad K(x, y) = (1 + x \cdot y)^d, \quad d = 1, \ldots$    (38)

The second form is preferable as it avoids problems with the Hessian becoming zero.

3.1.2 Gaussian Radial Basis Function

    $K(x, y) = \exp\left( -\frac{\|x - y\|^2}{2\sigma^2} \right)$    (39)

Classical techniques utilising radial basis functions employ some method of determining a subset of centres. This selection is implicit when employed within an SVM, which is attractive: each non-zero support vector contributes one local Gaussian function, centred at that data point. By further considerations it is possible to select the global basis function width, $\sigma$, using the SRM principle [19].

3.1.3 Exponential Radial Basis Function

    $K(x, y) = \exp\left( -\frac{\|x - y\|}{2\sigma^2} \right)$    (40)

This function produces a piecewise linear solution, which can be attractive when discontinuities are acceptable.
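The first three kernels translate directly into code; a sketch as MATLAB anonymous functions over matrices of row vectors, where sqdist is a small helper for pairwise squared distances (all names are illustrative):

% Sketch: the kernels of Sections 3.1.1-3.1.3.
sqdist = @(U,V) sum(U.^2,2) + sum(V.^2,2)' - 2*U*V';       % pairwise |u-v|^2
kpoly  = @(U,V,d)     (1 + U*V').^d;                        % Equation (38)
kgauss = @(U,V,sigma) exp(-sqdist(U,V) / (2*sigma^2));      % Equation (39)
kerbf  = @(U,V,sigma) exp(-sqrt(max(sqdist(U,V),0)) / (2*sigma^2));  % Eq. (40)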

3.1.4 Multi-Layer Perceptron

    $K(x, y) = \tanh\left( b (x \cdot y) - c \right),$    (41)

for some values of $b$ and $c$.

3.1.5 Fourier Series

A Fourier series can be considered an expansion in a $2N+1$ dimensional feature space. The kernel is defined over the interval $[-\pi/2, \pi/2]$,

    $K(x, y) = \frac{\sin\left( (N + \tfrac{1}{2})(x - y) \right)}{\sin\left( \tfrac{1}{2}(x - y) \right)}.$    (42)

3.1.6 Linear Splines

It is also possible to use infinite spline kernels,

    $K(x, y) = 1 + xy + xy \min(x, y) - \frac{x + y}{2} \min(x, y)^2 + \frac{\min(x, y)^3}{3}.$    (43)

This kernel is defined on the interval $[0, \infty)$.

3.1.7 B_n Splines

The kernel is defined on the interval $[-1, 1]$,

    $K(x, y) = B_{2n+1}(x - y).$    (44)

3.1.8 Tensor Product Splines

Multidimensional spline kernels can be obtained by forming tensor products,

    $K(x, y) = \prod_{m=1}^{n} K_m(x_m, y_m).$    (45)

3.2 Implicit vs. Explicit Bias

The solutions with an implicit bias and an explicit bias are not the same, which may initially come as a surprise. However, the difference helps to highlight the problem with the interpretation of generalisation in high dimensional feature spaces.

(a) Explicit (linear)   (b) Implicit (polynomial degree 1)

Figure 11: Comparison between implicit and explicit bias for a linear kernel: linear with explicit bias against polynomial of degree 1 with implicit bias.

3.3 Data Normalisation

If the data is not normalised there may be too much emphasis on one input; if it is normalised, this will affect the solution. Is this effect predictable? Also, some kernels are only valid over a restricted interval and as such demand data normalisation. Empirical observations suggest that normalisation also improves the condition number of the matrix in the optimisation problem.
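A minimal sketch of one common normalisation consistent with the interval restrictions above, mapping each attribute onto [-1, 1]; variable names are illustrative:

% Sketch: normalise each input attribute onto [-1, 1].
lo = min(X); hi = max(X);            % per-attribute range
Xn = 2*(X - lo)./(hi - lo) - 1;      % each column mapped onto [-1, 1]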

4 Classification Example: IRIS Data

The iris data set is a well known data set used for demonstrating the performance of classification algorithms. The data set contains four attributes of an iris, and the goal is to classify the class of iris based on these four attributes. To simplify the problem we restrict ourselves to the two features which contain the most information about the class, namely the petal length and the petal width. The distribution of the data is illustrated in Figure 12.

Figure 12: Iris data set (petal width against petal length; classes Virginica, Versicolor and Setosa)

This data set has been widely used, and [17] has applied SVM to it, which provides a means of verifying the operation of the software. The Setosa and Versicolor classes are easily separated with a linear boundary, and the support vector solution using an inner product kernel is illustrated in Figure 13.

Figure 13: Separating Setosa with a linear SVM

Here the support vectors are circled. It is evident that there are only two support vectors. These vectors contain the important information about the classification boundary, and hence suggest that the support vector machine can be used to extract the training patterns that contain the most information for the classification problem. The separation of the class Virginica from the other two classes is not so trivial. In fact, two of the examples are identical in petal length and width, but correspond to different classes.

Figure 14: Separating Virginica with a polynomial SVM (degree 2)

Figure 15: Separating Virginica with a polynomial SVM (degree 10)

Figure 16: Separating Virginica with a Radial Basis Function SVM (σ = 1.0)

Figure 17: Separating Virginica with a polynomial SVM (degree 2, C = 10)

(a) C = ∞  (b) C = 1000  (c) C = 100  (d) C = 10  (e) C = 1  (f) C = 0.1  (g) C = 0.01  (h) C = 0.001

Figure 18: The effect of C on the separation of Versicolor with a linear spline SVM

Figure 18 illustrates how the classification boundary changes with the parameter C, which controls the tolerance to misclassification errors. Interestingly, the range of values [0.1, 1000] provides sensible boundaries, but to know whether an open boundary (e.g. Figure 18(e)) or a closed boundary (e.g. Figure 18(c)) is more appropriate would require prior knowledge about the problem under consideration. It would be expected that Figure 13 and Figure 17 would give reasonable generalisation. [13] applies SVC to face recognition.
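One way to choose C, as suggested in the discussion of Section 2.4, is simple validation over a grid. The sketch below does this for a linear soft-margin SVC via quadprog; the synthetic Gaussian data is illustrative only and is not the iris data.

% Sketch: choosing C by validation error over a grid.
rng(0);
Xtr = [randn(20,2)+1; randn(20,2)-1]; Ytr = [ones(20,1); -ones(20,1)];
Xva = [randn(20,2)+1; randn(20,2)-1]; Yva = [ones(20,1); -ones(20,1)];
H = (Ytr*Ytr').*(Xtr*Xtr'); f = -ones(40,1);
opts = optimoptions('quadprog','Display','off');
Cs = [0.1 1 10 100 1000]; err = zeros(size(Cs));
for k = 1:numel(Cs)
    a = quadprog(H, f, [], [], Ytr', 0, zeros(40,1), Cs(k)*ones(40,1), [], opts);
    w = Xtr'*(a.*Ytr);
    sv = find(a > 1e-6 & a < Cs(k) - 1e-6);     % unbounded support vectors
    if isempty(sv), sv = find(a > 1e-6); end    % fall back to all SVs
    b = mean(Ytr(sv) - Xtr(sv,:)*w);
    err(k) = mean(sign(Xva*w + b) ~= Yva);      % validation error rate
end
[~, best] = min(err);
fprintf('chosen C = %g\n', Cs(best));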

5 Support Vector Regression

SVMs can be applied to regression problems. The main difference is the type of loss function employed. Figure 19 illustrates three possible loss functions:

(a) Quadratic   (b) Least Modulus   (c) ε-Insensitive

Figure 19: Loss Functions

Loss (a) is equivalent to the standard least squares error criterion. Loss (b) produces robust regression in the sense of Huber. The ε-insensitive loss (c) is important because it makes the set of support vectors sparse.
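Written pointwise, the three losses of Figure 19 are one-liners; a sketch, with e standing for ε:

% Sketch: the loss functions of Figure 19, evaluated pointwise.
quadloss = @(y, f)    (y - f).^2;              % (a) quadratic
modloss  = @(y, f)    abs(y - f);              % (b) least modulus
epsloss  = @(y, f, e) max(abs(y - f) - e, 0);  % (c) epsilon-insensitive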

5.1 Linear Regression

Consider the problem of approximating the set of training vectors,

    $(y_1, x_1), \ldots, (y_\ell, x_\ell), \quad x \in \mathbb{R}^n, \; y \in \mathbb{R},$    (46)

with a linear function,

    $f(x) = (w \cdot x) + b,$    (47)

by minimising the functional,

    $\Phi(w, \xi, \xi^*) = \tfrac{1}{2}(w \cdot w) + C \sum_i \xi_i + C \sum_i \xi_i^*,$    (48)

(where $C$ is a given value). The solution is given by,

    $\max_{\alpha, \alpha^*} W(\alpha, \alpha^*) = \max_{\alpha, \alpha^*} \sum_{i=1}^{\ell} \left[ \alpha_i (y_i - \varepsilon) - \alpha_i^* (y_i + \varepsilon) \right] - \tfrac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)(x_i \cdot x_j),$    (49)

with constraints,

    $0 \leq \alpha_i, \alpha_i^* \leq C, \quad i = 1, \ldots, \ell, \qquad \sum_{i=1}^{\ell} (\alpha_i - \alpha_i^*) = 0.$    (50)

Solving Equation (49) with the constraints of Equation (50) determines the Lagrange multipliers, $\alpha_i$, $\alpha_i^*$, and the regression function is given by,

    $f(x) = (w^* \cdot x) + b^*,$    (51)

where

    $w^* = \sum_{i=1}^{\ell} (\alpha_i - \alpha_i^*) x_i, \qquad b^* = -\tfrac{1}{2} \left( w^* \cdot [x_r + x_s] \right).$    (52)

(Alternatively, a more stable way of computing $b$ is possible [15].) It can be shown that,

    $\alpha_i \alpha_i^* = 0, \quad i = 1, \ldots, \ell.$

Therefore the support vectors are points where exactly one of the Lagrange multipliers is greater than zero.

5.1.1 Example

Given the training set of Table 3, the linear regression solution is shown in Figure 20.

Table 3: Regression Data (X, Y)

Figure 20: Linear regression

5.2 Non Linear Regression

The support vector machine maps the input vector, $x$, into a high dimensional feature space, $z$, through some non-linear mapping chosen a priori, and constructs a linear regression function in this space. The solution is given by,

    $\max_{\alpha, \alpha^*} W(\alpha, \alpha^*) = \max_{\alpha, \alpha^*} \sum_{i=1}^{\ell} \left[ \alpha_i (y_i - \varepsilon) - \alpha_i^* (y_i + \varepsilon) \right] - \tfrac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*) K(x_i, x_j),$    (53)

with constraints,

    $0 \leq \alpha_i, \alpha_i^* \leq C, \quad i = 1, \ldots, \ell, \qquad \sum_{i=1}^{\ell} (\alpha_i - \alpha_i^*) = 0,$    (54)

where the equality constraint applies only if the kernel function does not contain a bias term. Solving Equation (53) with the constraints of Equation (54) determines the Lagrange multipliers, $\alpha_i$, $\alpha_i^*$, and the regression function is given by,

    $f(x) = \sum_{i \in \mathrm{SVs}} (\alpha_i - \alpha_i^*) K(x_i, x) + b^*,$    (55)

where

    $b^* = -\tfrac{1}{2} \sum_{i \in \mathrm{SVs}} (\alpha_i - \alpha_i^*) \left[ K(x_r, x_i) + K(x_s, x_i) \right].$    (56)

(Alternatively, a more stable way of computing $b$ is possible [15].)
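A minimal MATLAB sketch of the dual (53)-(54), following the block structure used by the implementation in the Appendix; it assumes quadprog from the Optimization Toolbox, and the sine data, Gaussian RBF kernel and parameter values are illustrative only:

% Sketch: kernel SVR via the dual (53)-(54).
x = linspace(0, 2*pi, 20)'; y = sin(x);        % toy 1-D regression data
e = 0.05; C = 10; sigma = 1.0; n = numel(x);
K  = exp(-(x - x').^2 / (2*sigma^2));          % Gaussian RBF kernel matrix
Hb = [K -K; -K K];                             % dual Hessian over [alpha; alpha*]
c  = [e - y; e + y];                           % linear term of the dual
Aeq = [ones(1,n) -ones(1,n)];                  % sum(alpha - alpha*) = 0
opts = optimoptions('quadprog','Display','off');
z = quadprog(Hb + 1e-8*eye(2*n), c, [], [], Aeq, 0, ...
             zeros(2*n,1), C*ones(2*n,1), [], opts);
beta = z(1:n) - z(n+1:2*n);                    % alpha - alpha*
free = find(abs(beta) > 1e-6 & abs(beta) < C - 1e-6);
b = mean(y(free) - K(free,:)*beta - e*sign(beta(free)));  % from the KKT conditions
ypred = K*beta + b;                            % fitted values, Equation (55)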

As with the SVC, the equality constraint may be dropped if the kernel contains a bias term, $b$ being accommodated within the kernel function, and the regression function is then given by,

    $f(x) = \sum_{i \in \mathrm{SVs}} (\alpha_i - \alpha_i^*) K(x_i, x).$    (57)

5.2.1 Case 1: The Separable Case

In the separable case, where all the training points can be approximated within the insensitive region, the bound $C$ disappears and the solution is given by,

    $W(\alpha, \alpha^*) = \sum_{i=1}^{\ell} \left[ \alpha_i (y_i - \varepsilon) - \alpha_i^* (y_i + \varepsilon) \right] - \tfrac{1}{2} \sum_{i=1}^{\ell} \sum_{j=1}^{\ell} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*) K(x_i, x_j), \quad \alpha_i, \alpha_i^* \geq 0.$    (58)

5.2.2 Polynomial Learning Machine Example

Given the training set of Table 3 and a polynomial kernel,

    $K(x, x') = (1 + x \cdot x')^d,$    (59)

the solution is shown in Figure 21.

Figure 21: Polynomial Regression

Figure 22: Radial Basis Function Regression

Figure 23: Infinite Spline Regression

Figure 24: Infinite B-spline Regression

Figure 25: Exponential RBF Regression

6 Regression Example: Titanium Data

[12] has achieved excellent results applying SVMs to time series from the Santa Fe data sets, and [11] has applied SVMs to time series modelling. The example given here considers the titanium data [6] as an illustrative example of one dimensional non-linear regression. There are two parameters controlling the regression: C, which controls the trade-off between the flatness of the function and the tolerance to errors, and ε, which controls the width of the insensitive region and hence the sparseness of the solution.
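The influence of ε on sparseness can be seen by sweeping it and counting the support vectors; this sketch reuses Hb, n, C and y from the SVR snippet of Section 5.2, and the ε grid is arbitrary:

% Sketch: the effect of epsilon on the number of support vectors.
opts = optimoptions('quadprog','Display','off');
Aeq = [ones(1,n) -ones(1,n)];
for e = [0.01 0.05 0.1 0.2]
    c = [e - y; e + y];
    z = quadprog(Hb + 1e-8*eye(2*n), c, [], [], Aeq, 0, ...
                 zeros(2*n,1), C*ones(2*n,1), [], opts);
    beta = z(1:n) - z(n+1:2*n);
    fprintf('epsilon = %4.2f -> %d support vectors\n', e, nnz(abs(beta) > 1e-6));
end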

Figure 26: Linear Spline Regression (ε = 0.05)

Figure 27: B-Spline Regression (ε = 0.05)

Figure 28: Gaussian RBF Regression (ε = 0.05, σ = 1.0)

Figure 29: Exponential RBF Regression (ε = 0.05, σ = 1.0)

Figure 30: Fourier Regression (ε = 0.05, degree 3)

Figure 31: Gaussian RBF Regression (ε = 0.05, σ = 0.3)

Figure 32: Linear Spline Regression (ε = 0.05, C = 10)

Figure 33: B-Spline Regression (ε = 0.05, C = 10)

7 Conclusions

SVMs have a strong theoretical foundation, and training reduces to a quadratic programming problem with a global minimum. Open issues include the fact that the margin is maximised in the feature space rather than the input space, the choice of C and ε, the choice of kernel function and the resulting model mismatch, the incorporation of invariances, and the observation that the curse of dimensionality is shifted onto the size of the training data.

References

[1] M. Aizerman, E. Braverman and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
[2] N. Aronszajn. Theory of Reproducing Kernels. Trans. Amer. Math. Soc., 68:337-404, 1950.
[3] R. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[4] V. Blanz, B. Schölkopf, H. Bülthoff, C. Burges, V. Vapnik, and T. Vetter. Comparison of view-based object recognition algorithms using realistic 3D models. In: C. von der Malsburg, W. von Seelen, J. C. Vorbrüggen, and B. Sendhoff (eds.): Artificial Neural Networks - ICANN'96. Springer Lecture Notes in Computer Science, Vol. 1112, Berlin, 1996.
[5] C. Cortes and V. Vapnik. Support Vector Networks. Machine Learning, 20:273-297, 1995.
[6] P. Dierckx. Curve and Surface Fitting with Splines. Monographs on Numerical Analysis, Clarendon Press, Oxford, 1993.
[7] F. Girosi. An Equivalence Between Sparse Approximation and Support Vector Machines. Technical Report AIM-1606, Artificial Intelligence Laboratory, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, 1997.
[8] S. R. Gunn, M. Brown and K. M. Bossley. Network Performance Assessment for Neurofuzzy Data Modelling. Lecture Notes in Computer Science, 1280:313-323, 1997.
[9] N. E. Heckman. The Theory and Application of Penalized Least Squares Methods or Reproducing Kernel Hilbert Spaces Made Easy, 1997. ftp://newton.stat.ubc.ca/pub/nancy/pls.ps
[10] M. Minoux. Mathematical Programming: Theory and Algorithms. John Wiley and Sons, 1986.
[11] S. Mukherjee, E. Osuna, and F. Girosi. Nonlinear Prediction of Chaotic Time Series using Support Vector Machines. To appear in Proc. of IEEE NNSP'97, Amelia Island, FL, Sep., 1997.
[12] K. R. Müller, A. Smola, G. Rätsch, B. Schölkopf, J. Kohlmorgen and V. Vapnik. Predicting Time Series with Support Vector Machines. Submitted to ICANN'97.

[13] E. Osuna, R. Freund and F. Girosi. An Improved Training Algorithm for Support Vector Machines. To appear in Proc. of IEEE NNSP'97, Amelia Island, FL, Sep., 1997.
[14] E. Osuna, R. Freund and F. Girosi. Improved Training Algorithm for Support Vector Machines. To appear in NNSP'97.
[15] A. Smola and B. Schölkopf. On a Kernel-based Method for Pattern Recognition, Regression, Approximation and Operator Inversion. GMD Technical Report. (To appear in Algorithmica, special issue on Machine Learning.)
[16] M. O. Stitson and J. A. E. Weston. Implementational Issues of Support Vector Machines. Technical Report CSD-TR-96-18, Computational Intelligence Group, Royal Holloway, University of London, 1996.
[17] M. O. Stitson, J. A. E. Weston, A. Gammerman, V. Vovk and V. Vapnik. Theory of Support Vector Machines. Technical Report CSD-TR-96-17, Computational Intelligence Group, Royal Holloway, University of London, 1996.
[18] V. Vapnik, S. Golowich and A. Smola. Support Vector Method for Function Approximation, Regression Estimation, and Signal Processing. In: M. Mozer, M. Jordan, and T. Petsche (eds.): Neural Information Processing Systems, Vol. 9. MIT Press, Cambridge, MA, 1997.
[19] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
[20] G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990.

Appendix - Implementation Issues

[16] considers chunking, and [14] considers a decomposition algorithm with guaranteed convergence to the global minimum. Numerical considerations: the Hessian can be badly conditioned, so a small amount of zero order regularisation is added; the solution is sensitive to the amount of this regularisation. The support vector algorithms were implemented in MATLAB.

Support Vector Classification

The optimisation problem can be expressed in matrix notation as,

    $\min_\alpha \tfrac{1}{2} \alpha^T H \alpha + c^T \alpha,$    (60)

where

    $H = Z Z^T, \qquad c = (-1, \ldots, -1)^T,$    (61)

with constraints

    $\alpha^T Y = 0, \qquad \alpha_i \geq 0, \quad i = 1, \ldots, \ell,$    (62)

where

    $Z = \begin{pmatrix} y_1 x_1 \\ \vdots \\ y_\ell x_\ell \end{pmatrix}, \qquad Y = \begin{pmatrix} y_1 \\ \vdots \\ y_\ell \end{pmatrix}.$    (63)

The MATLAB implementation is given below:

function [nsv, alpha, b0] = svc(X,Y,ker,C)
%SVC Support Vector Classification
%
% Usage: [nsv alpha bias] = svc(X,Y,ker,C)
%
% Parameters: X     - Training inputs
%             Y     - Training targets
%             ker   - kernel function
%             C     - upper bound (non-separable case)
%             nsv   - number of support vectors
%             alpha - Lagrange Multipliers
%             b0    - bias term
%
% Author: Steve Gunn (srg@ecs.soton.ac.uk)

if (nargin < 2 | nargin > 4) % check correct number of arguments
   help svc
else
   n = size(X,1);
   if (nargin < 4) C = Inf; end
   if (nargin < 3) ker = 'linear'; end
   epsilon = 1e-10;

   % Construct the H matrix and c vector
   H = zeros(n,n);
   for i = 1:n
      for j = 1:n
         H(i,j) = Y(i)*Y(j)*svkernel(ker,X(i,:),X(j,:));
      end
   end
   c = -ones(n,1);

   % Add small amount of zero order regularisation to
   % avoid problems when Hessian is badly conditioned.
   if (abs(cond(H)) > 1e+10)
      fprintf('Hessian badly conditioned, regularising ...\n');
      fprintf('Old condition number: %4.2g\n', cond(H));
      H = H + 1e-10*eye(size(H));
      fprintf('New condition number: %4.2g\n', cond(H));
   end

   % Set up the parameters for the Optimisation problem
   vlb = zeros(n,1);      % Set the bounds: alphas >= 0
   vub = C*ones(n,1);     %                 alphas <= C
   x0 = [];               % The starting point
   neqcstr = nobias(ker); % Set the number of equality constraints (1 or 0)
   if neqcstr
      A = Y'; b = 0;      % Set the constraint Ax = b
   else
      A = []; b = [];
   end

   % Solve the Optimisation Problem
   st = cputime;
   if ( vlb == zeros(size(vlb)) & min(vub) == Inf & neqcstr == 0 )
      % Separable problem with Implicit Bias term
      % Use Non Negative Least Squares
      alpha = fnnls(H,-c);
   else
      % Otherwise use Quadratic Programming
      alpha = qp(H, c, A, b, vlb, vub, x0, neqcstr, -1);
   end
   fprintf('Execution time: %4.1f seconds\n', cputime - st);
   fprintf('|w0|^2    : %f\n', alpha'*H*alpha);
   fprintf('Sum alpha : %f\n', sum(alpha));

   % Compute the number of Support Vectors
   svi = find( abs(alpha) > epsilon );
   nsv = length(svi);

   if neqcstr == 0
      % Implicit bias, b0
      b0 = 0;
   else
      % Explicit bias, b0;
      % find b0 from pair of support vectors, one from each class
      classAsv = find( abs(alpha) > epsilon & Y == 1 );
      classBsv = find( abs(alpha) > epsilon & Y == -1 );
      nAsv = length( classAsv );
      nBsv = length( classBsv );
      if ( nAsv > 0 & nBsv > 0 )
         svpair = [classAsv(1) classBsv(1)];
         b0 = -(1/2)*sum(Y(svpair)'*H(svpair,svi)*alpha(svi));
      else
         b0 = 0;
      end

   end
end
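A hypothetical call of the routine above on a small two-class set; svkernel, nobias, qp and fnnls are toolbox internals assumed to be on the path:

% Sketch: calling svc on illustrative data.
X = [1 1; 2 2; 2 0; 0 0; 1 0; 0 1];          % illustrative inputs
Y = [1; 1; 1; -1; -1; -1];                   % class labels
[nsv, alpha, b0] = svc(X, Y, 'linear', 10);  % soft margin, C = 10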

Support Vector Regression

The optimisation problem can be expressed in matrix notation as,

    $\min_x \tfrac{1}{2} x^T H x + c^T x,$    (64)

where

    $H = \begin{pmatrix} XX^T & -XX^T \\ -XX^T & XX^T \end{pmatrix}, \qquad
     c = \begin{pmatrix} \varepsilon \mathbf{1} - Y \\ \varepsilon \mathbf{1} + Y \end{pmatrix}, \qquad
     x = \begin{pmatrix} \alpha \\ \alpha^* \end{pmatrix},$    (65)

with constraints

    $x^T \begin{pmatrix} \mathbf{1} \\ -\mathbf{1} \end{pmatrix} = 0, \qquad \alpha_i, \alpha_i^* \geq 0, \quad i = 1, \ldots, \ell,$    (66)

where

    $X = \begin{pmatrix} x_1 \\ \vdots \\ x_\ell \end{pmatrix}, \qquad Y = \begin{pmatrix} y_1 \\ \vdots \\ y_\ell \end{pmatrix}.$    (67)

The MATLAB implementation is given below:

function [nsv, beta, b0] = svr(X,Y,ker,e,C)
%SVR Support Vector Regression
%
% Usage: [nsv beta b0] = svr(X,Y,ker,e,C)
%
% Parameters: X    - Training inputs
%             Y    - Training targets
%             ker  - kernel function
%             e    - insensitivity
%             C    - upper bound (non-separable case)
%             nsv  - number of support vectors
%             beta - Difference of Lagrange Multipliers
%             b0   - bias term
%
% Author: Steve Gunn (srg@ecs.soton.ac.uk)

if (nargin < 3 | nargin > 5) % check correct number of arguments
   help svr
else
   n = size(X,1);
   if (nargin < 5) C = Inf; end
   if (nargin < 4) e = 0.05; end
   if (nargin < 3) ker = 'linear'; end
   epsilon = 1e-10;  % tolerance for Support Vector Detection

   % Construct the H matrix and c vector
   H = zeros(n,n);
   for i = 1:n
      for j = 1:n
         H(i,j) = svkernel(ker,X(i,:),X(j,:));
      end
   end
   Hb = [H -H; -H H];
   c = [(e*ones(n,1) - Y); (e*ones(n,1) + Y)];

   % Add small amount of zero order regularisation to

   % avoid problems when Hessian is badly conditioned.
   % Rank is always less than or equal to n.
   % Note that adding too much regularisation will perturb the solution.
   if (abs(cond(Hb)) > 1e+10)
      fprintf('Hessian badly conditioned, regularising ...\n');
      fprintf('Old condition number: %4.2g\n', cond(Hb));
      Hb = Hb + 1e-10*eye(size(Hb));
      fprintf('New condition number: %4.2g\n', cond(Hb));
   end

   % Set up the parameters for the Optimisation problem
   vlb = zeros(2*n,1);    % Set the bounds: alphas >= 0
   vub = C*ones(2*n,1);   %                 alphas <= C
   x0 = [];               % The starting point
   neqcstr = nobias(ker); % Set the number of equality constraints (1 or 0)
   if neqcstr
      A = [ones(1,n) -ones(1,n)]; b = 0;  % Set the constraint Ax = b
   else
      A = []; b = [];
   end

   % Solve the Optimisation Problem
   st = cputime;
   if ( vlb == zeros(size(vlb)) & min(vub) == Inf & neqcstr == 0 )
      % Separable problem with Implicit Bias term
      % Use Non Negative Least Squares
      alpha = fnnls(Hb,-c);
   else
      % Otherwise use Quadratic Programming
      alpha = qp(Hb, c, A, b, vlb, vub, x0, neqcstr, -1);
   end
   fprintf('Execution time: %4.1f seconds\n', cputime - st);
   fprintf('|w0|^2    : %f\n', alpha'*Hb*alpha);
   fprintf('Sum alpha : %f\n', sum(alpha));

   % Compute the number of Support Vectors
   beta = alpha(n+1:2*n) - alpha(1:n);
   svi = find( abs(beta) > epsilon );
   nsv = length( svi );

   if neqcstr == 0
      % Implicit bias, b0
      b0 = 0;
   else
      % Explicit bias, b0;
      % compute using robust method of Smola:
      % find b0 from average of support vectors with interpolation error e
      svbi = find( abs(beta) > epsilon & abs(beta) < C );
      nsvb = length(svbi);
      if nsvb > 0
         b0 = (1/nsvb)*sum(Y(svbi) + e*sign(beta(svbi)) + H(svbi,svi)*beta(svi));
      else
         b0 = (max(Y)+min(Y))/2;
      end
   end
end

Toolbox

The toolbox can be downloaded from
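And a hypothetical call of svr on noisy sine data; the kernel name is assumed to follow svkernel's conventions:

% Sketch: calling svr on illustrative data.
x = linspace(0, 2*pi, 30)';
y = sin(x) + 0.05*randn(30,1);                 % noisy illustrative data
[nsv, beta, b0] = svr(x, y, 'rbf', 0.05, 10);  % epsilon = 0.05, C = 10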


MACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression 11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING

More information

Gaussian Processes and Polynomial Chaos Expansion for Regression Problem: Linkage via the RKHS and Comparison via the KL Divergence

Gaussian Processes and Polynomial Chaos Expansion for Regression Problem: Linkage via the RKHS and Comparison via the KL Divergence entropy Artce Gaussan Processes and Poynoma Chaos Expanson for Regresson Probem: Lnkage va the RKHS and Comparson va the KL Dvergence Lang Yan * ID, Xaojun Duan, Bowen Lu and Jn Xu Coege of Lbera Arts

More information

MMA and GCMMA two methods for nonlinear optimization

MMA and GCMMA two methods for nonlinear optimization MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0

n α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0 MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector

More information

Supplementary Material: Learning Structured Weight Uncertainty in Bayesian Neural Networks

Supplementary Material: Learning Structured Weight Uncertainty in Bayesian Neural Networks Shengyang Sun, Changyou Chen, Lawrence Carn Suppementary Matera: Learnng Structured Weght Uncertanty n Bayesan Neura Networks Shengyang Sun Changyou Chen Lawrence Carn Tsnghua Unversty Duke Unversty Duke

More information

Journal of Multivariate Analysis

Journal of Multivariate Analysis Journa of Mutvarate Anayss 3 (04) 74 96 Contents sts avaabe at ScenceDrect Journa of Mutvarate Anayss journa homepage: www.esever.com/ocate/jmva Hgh-dmensona sparse MANOVA T. Tony Ca a, Yn Xa b, a Department

More information

Non-linear Canonical Correlation Analysis Using a RBF Network

Non-linear Canonical Correlation Analysis Using a RBF Network ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane

More information

Andre Schneider P622

Andre Schneider P622 Andre Schneder P6 Probem Set #0 March, 00 Srednc 7. Suppose that we have a theory wth Negectng the hgher order terms, show that Souton Knowng β(α and γ m (α we can wrte β(α =b α O(α 3 (. γ m (α =c α O(α

More information

Support Vector Machines CS434

Support Vector Machines CS434 Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? + + + + + + + + + Intuton of Margn Consder ponts

More information

Research Article H Estimates for Discrete-Time Markovian Jump Linear Systems

Research Article H Estimates for Discrete-Time Markovian Jump Linear Systems Mathematca Probems n Engneerng Voume 213 Artce ID 945342 7 pages http://dxdoorg/11155/213/945342 Research Artce H Estmates for Dscrete-Tme Markovan Jump Lnear Systems Marco H Terra 1 Gdson Jesus 2 and

More information

Ths artce was pubshed n an Esever journa. The attached copy s furnshed to the author for non-commerca research and educaton use, ncudng for nstructon at the author s nsttuton, sharng wth coeagues and provdng

More information

Linear Feature Engineering 11

Linear Feature Engineering 11 Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19

More information

Chapter 6. Rotations and Tensors

Chapter 6. Rotations and Tensors Vector Spaces n Physcs 8/6/5 Chapter 6. Rotatons and ensors here s a speca knd of near transformaton whch s used to transforms coordnates from one set of axes to another set of axes (wth the same orgn).

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

An Augmented Lagrangian Coordination-Decomposition Algorithm for Solving Distributed Non-Convex Programs

An Augmented Lagrangian Coordination-Decomposition Algorithm for Solving Distributed Non-Convex Programs An Augmented Lagrangan Coordnaton-Decomposton Agorthm for Sovng Dstrbuted Non-Convex Programs Jean-Hubert Hours and Con N. Jones Abstract A nove augmented Lagrangan method for sovng non-convex programs

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

Supporting Information

Supporting Information Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Relevance Vector Machines Explained

Relevance Vector Machines Explained October 19, 2010 Relevance Vector Machnes Explaned Trstan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introducton Ths document has been wrtten n an attempt to make Tppng s [1] Relevance Vector Machnes

More information

A MIN-MAX REGRET ROBUST OPTIMIZATION APPROACH FOR LARGE SCALE FULL FACTORIAL SCENARIO DESIGN OF DATA UNCERTAINTY

A MIN-MAX REGRET ROBUST OPTIMIZATION APPROACH FOR LARGE SCALE FULL FACTORIAL SCENARIO DESIGN OF DATA UNCERTAINTY A MIN-MAX REGRET ROBST OPTIMIZATION APPROACH FOR ARGE SCAE F FACTORIA SCENARIO DESIGN OF DATA NCERTAINTY Travat Assavapokee Department of Industra Engneerng, nversty of Houston, Houston, Texas 7704-4008,

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

A Derivative-Free Algorithm for Bound Constrained Optimization

A Derivative-Free Algorithm for Bound Constrained Optimization Computatona Optmzaton and Appcatons, 21, 119 142, 2002 c 2002 Kuwer Academc Pubshers. Manufactured n The Netherands. A Dervatve-Free Agorthm for Bound Constraned Optmzaton STEFANO LUCIDI ucd@ds.unroma.t

More information