General Averaged Divergence Analysis
Dacheng Tao (1), Xuelong Li (2), Xindong Wu (3), and Stephen J. Maybank (2)

(1) Department of Computing, Hong Kong Polytechnic University, Hong Kong
(2) School of Computer Science & Information Systems, Birkbeck, University of London, London, UK
(3) Department of Computer Science, University of Vermont, USA

csdct@comp.polyu.edu.hk; {xuelong, sjmaybank}@dcs.bbk.ac.uk; xwu@cs.uvm.edu

Abstract

Subspace selection is a powerful tool in data mining. An important subspace method is the Fisher-Rao linear discriminant analysis (LDA), which has been successfully applied in many fields such as biometrics, bioinformatics, and multimedia retrieval. However, LDA has a critical drawback: the projection to a subspace tends to merge those classes that are close together in the original feature space. If the separated classes are sampled from Gaussian distributions, all with identical covariance matrices, then LDA maximizes the mean value of the Kullback-Leibler (KL) divergences between the different classes. We generalize this point of view to obtain a framework for choosing a subspace by 1) generalizing the KL divergence to the Bregman divergence and 2) generalizing the arithmetic mean to a general mean. The framework is named the general averaged divergence analysis (GADA). Under this GADA framework, a geometric mean divergence analysis (GMDA) method based on the geometric mean is studied. A large number of experiments based on synthetic data show that our method significantly outperforms LDA and several representative LDA extensions.

1 Introduction

The Fisher-Rao linear discriminant analysis (LDA) has a problem in merging classes that are close together in the original feature space, as shown in Fig. 1. This is referred to as the class separation problem in this paper. As pointed out by McLachlan [14], Loog et al. [11], and Lu et al. [13], this merging of classes significantly reduces the recognition rate. Fig. 1 shows an example in which LDA does not select the optimal subspace for pattern classification. To improve its performance, a weighted LDA (WLDA) [8] has been introduced. However, the recognition rate of WLDA is sensitive to the selection of the weighting function. Loog et al. [11] developed another weighting method for LDA,
namely the approximate pairwise accuracy criterion (aPAC). The advantage of aPAC is that the projection matrix can be obtained by the eigenvalue decomposition.

Fig. 1. There are three classes (named 1, 2, and 3) of samples, drawn from a Gaussian distribution in each class. LDA finds a projection direction that merges class 1 and class 2. A reasonable projection direction for classification trades the distance between class 1 and class 2 off against the distance between classes 1, 2 and class 3.

In this paper, to further reduce the class separation problem, we first generalize LDA to obtain a general averaged divergence analysis (GADA). If different classes are assumed to be sampled from Gaussian densities with different expected values but identical covariances, then LDA maximizes the mean value of the Kullback-Leibler (KL) divergences [3] between the different pairs of densities. Our generalization of LDA has two aspects: 1) the KL divergence is replaced by the Bregman divergence [2]; and 2) the arithmetic mean is replaced by a general mean function. By choosing different options in 1) and 2), a series of subspace selection algorithms is obtained, with LDA included as a special case. Under the general averaged divergence analysis, we investigate the effectiveness of geometric mean and KL divergence based subspace selection for solving the class separation problem. The geometric mean amplifies the effects of the small divergences and at the same time reduces the effects of the large divergences. The method is named the geometric mean divergence analysis (GMDA).
2 Linear Discriminant Analysis

The aim of LDA [8] is to find in the feature space a low dimensional subspace in which the different classes of measurements are well separated. The subspace is spanned by a set of vectors $w_i$, which form the columns of a matrix $W = [w_1, \ldots, w_k]$. It is assumed that a training set of measurements is available. The training set is divided into $c$ classes. The $i$th class contains $n_i$ measurements $x_{i;j}$ ($1 \le j \le n_i$) and has the mean value $\mu_i = (1/n_i)\sum_{j=1}^{n_i} x_{i;j}$. The between-class scatter matrix $S_b$ and the within-class scatter matrix $S_w$ are defined by

$$S_b = \frac{1}{n}\sum_{i=1}^{c} n_i\,(\mu_i - \mu)(\mu_i - \mu)^T, \qquad S_w = \frac{1}{n}\sum_{i=1}^{c}\sum_{j=1}^{n_i} (x_{i;j} - \mu_i)(x_{i;j} - \mu_i)^T, \qquad (1)$$

where $n = \sum_{i=1}^{c} n_i$ is the size of the training set and $\mu = (1/n)\sum_{i=1}^{c}\sum_{j=1}^{n_i} x_{i;j}$ is the mean vector of the total training set. The projection matrix $W^*$ of LDA is defined by

$$W^* = \arg\max_W \operatorname{tr}\big((W^T S_w W)^{-1}(W^T S_b W)\big). \qquad (2)$$

The projection matrix $W^*$ is computed from the eigenvectors of $S_w^{-1} S_b$, under the assumption that $S_w$ is invertible. If $c$ equals 2, LDA reduces to Fisher discriminant analysis [9]; otherwise LDA is known as Rao discriminant analysis [16].

3 General Averaged Divergence Analysis

If the different classes are assumed to be sampled from Gaussian densities with different expected values but identical covariances, then LDA maximizes the mean value of the KL divergences between the different pairs of densities. We propose a framework, the general averaged divergence analysis, for choosing a discriminative subspace by: 1) generalizing the distortion measure from the KL divergence to the Bregman divergence, and 2) generalizing the arithmetic mean to a general mean function.

3.1 Bregman Divergence [2]

Definition 1 (Bregman Divergence): Let $U: S \to \mathbb{R}$ be a $C^1$ convex function defined on a closed convex set $S \subseteq \mathbb{R}$. The first derivative of $U$ is $U'$, which is a monotone function; the inverse function of $U'$ is $\xi = (U')^{-1}$. The sample probability of the $i$th class is $p_i = p(y = i \mid x)$. The difference at $\xi(p_j)$ between the function $U$ and the tangent line to $U$ at $(\xi(p_i), U(\xi(p_i)))$ is given by

$$d\big(\xi(p_i), \xi(p_j)\big) = \big\{U(\xi(p_j)) - U(\xi(p_i))\big\} - p_i\big\{\xi(p_j) - \xi(p_i)\big\}. \qquad (3)$$

Based on (3), the Bregman divergence for $p_i$ and $p_j$ is

$$D(p_i \,\|\, p_j) = \int d\big(\xi(p_i), \xi(p_j)\big)\, d\mu, \qquad (4)$$

where $d\mu$ is the Lebesgue measure. The right-hand side of (4) is also called the U-divergence [15].
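Definition 1 can be sanity-checked numerically: with $U(x) = \exp(x)$, so that $\xi = \log$, the pointwise difference (3) integrates to the KL divergence, as stated in (5). The following is our own illustration on a discrete support, a crude stand-in for the Lebesgue integral in (4), not code from the paper.

```python
import numpy as np

# Two discrete densities on a common support, a crude stand-in for the
# Lebesgue integral in Eq. (4).
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

# U(x) = exp(x), so U' = exp and xi = (U')^{-1} = log.
U, xi = np.exp, np.log

# Eq. (3): difference between U and its tangent line at xi(p), evaluated
# at xi(q); convexity of U makes every term non-negative.
d = (U(xi(q)) - U(xi(p))) - p * (xi(q) - xi(p))

# Eq. (4): the Bregman divergence is the integral (here: a sum) of d.
bregman = d.sum()

# Eq. (5): since p and q each sum to 1, this equals the KL divergence.
kl = np.sum(p * np.log(p / q))
print(bregman, kl)
```

Because the two densities both integrate to one, the terms $q - p$ cancel under the integral, which is exactly why the Bregman divergence with $U = \exp$ collapses to the KL divergence.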
Because $U$ is a convex function, $d(\xi(p), \xi(q))$ is non-negative; consequently, the Bregman divergence is non-negative. Because $d(\xi(p), \xi(q))$ is in general not symmetric, the Bregman divergence is also not symmetric. Detailed information about the Bregman divergence can be found in [15]. If $U(x) = \exp(x)$, then the Bregman divergence reduces to the usual KL divergence,

$$D(p \,\|\, q) = \int \Big(q - p + p \log\frac{p}{q}\Big)\, d\mu = \int p \log\frac{p}{q}\, d\mu = KL(p \,\|\, q). \qquad (5)$$

Further examples can be found in [15]. For Gaussian probability density functions $p_i = N(x; \mu_i, \Sigma_i)$, where $\mu_i$ is the mean vector of the $i$th class samples and $\Sigma_i$ is the within-class covariance matrix of the $i$th class, the KL divergence [3] is

$$KL(p_i \,\|\, p_j) = \int N(x; \mu_i, \Sigma_i)\, \ln\frac{N(x; \mu_i, \Sigma_i)}{N(x; \mu_j, \Sigma_j)}\, dx = \frac{1}{2}\Big(\ln|\Sigma_j| - \ln|\Sigma_i| + \operatorname{tr}\big(\Sigma_j^{-1}\Sigma_i\big) + \operatorname{tr}\big(\Sigma_j^{-1} D_{ij}\big) - n\Big), \qquad (6)$$

where $D_{ij} = (\mu_i - \mu_j)(\mu_i - \mu_j)^T$ and $|\Sigma| = \det(\Sigma)$. To simplify the notation, we denote the KL divergence between the projected densities $p(W^T x \mid y = i)$ and $p(W^T x \mid y = j)$ by

$$D_W(p_i \,\|\, p_j) = D\big(p(W^T x \mid y = i) \,\|\, p(W^T x \mid y = j)\big). \qquad (7)$$

3.2 General Averaged Divergence Analysis

We replace the arithmetic mean by the following general mean,
$$V_\varphi(W) = \varphi^{-1}\left(\frac{\sum_{1 \le i < j \le c} q_i q_j\, \varphi\big(D_W(p_i \,\|\, p_j)\big)}{\sum_{1 \le m < n \le c} q_m q_n}\right), \qquad (8)$$

where $\varphi(\cdot)$ is a strictly monotonic increasing real-valued function defined on $(0, +\infty)$; $\varphi^{-1}(\cdot)$ is the inverse function of $\varphi(\cdot)$; $q_i$ is the prior probability of the $i$th class (usually, we can set $q_i = n_i/n$ or simply $q_i = 1/c$); $p_i$ is the conditional distribution of the $i$th class; $x \in \mathbb{R}^n$, where $\mathbb{R}^n$ is the feature space containing the training samples; and $W \in \mathbb{R}^{n \times k}$ ($k < n$) is the projection matrix. The general averaged divergence function measures the average of all divergences between pairs of classes in the subspace. We obtain the projection matrix $W^*$ by maximizing the general averaged divergence function $V_\varphi(W)$ over $W$ for a fixed $\varphi(\cdot)$. The general optimization algorithm for subspace selection based on (8) is given in Table 1. Note that, usually, the concavity of $V_\varphi(W)$ cannot be guaranteed. To reduce the effects of local maxima [1], we choose a number of different initial projection matrices, perform optimizations on them, and then select the best one. If $V_\varphi(W)$ depends only on the subspace of $\mathbb{R}^n$ spanned by the columns of $W$, then $W$ can be replaced by $WM$, where $M$ is a matrix such that the columns of $WM$ are orthogonal.

On setting $\varphi(x) = x$, we obtain the following arithmetic mean based method for choosing a subspace,

$$W^* = \arg\max_W \frac{\sum_{1 \le i < j \le c} q_i q_j\, D_W(p_i \,\|\, p_j)}{\sum_{1 \le m < n \le c} q_m q_n} = \arg\max_W \sum_{1 \le i < j \le c} q_i q_j\, D_W(p_i \,\|\, p_j). \qquad (9)$$

Observation 1: LDA maximizes the arithmetic mean of the KL divergences between all pairs of classes, under the assumption that the Gaussian distributions for the different classes all have the same covariance matrix. The projection matrix $W^*$ in LDA can be obtained by maximizing a particular $V_\varphi(W)$.

Proof. According to (6) and (7), the KL divergence between the $i$th class and the $j$th class in the projected subspace, under the assumption of equal covariance matrices ($\Sigma_i = \Sigma_j = \Sigma$), is

$$D_W(p_i \,\|\, p_j) = \frac{1}{2}\operatorname{tr}\big((W^T \Sigma W)^{-1} W^T D_{ij} W\big) + \text{constant}. \qquad (10)$$

Table 1. General Averaged Divergence Maximization for Subspace Selection.
Input: training samples $x_{i;j}$, where $i$ denotes the $i$th class ($1 \le i \le c$) and $j$ the $j$th sample in the $i$th class ($1 \le j \le n_i$); the dimension of the selected features $k < n$ ($n$ is the dimension of $x_{i;j}$); and $M$, the maximum number of different initial values for the projection matrix.
Output: the optimal linear
projection matrix $W^*$.
1. for $m = 1:M$ {
2.   randomly initialize $W_t$ ($t = 1$), i.e., all entries of $W_1$ are random numbers;
3.   while $|V_\varphi(W_{t+1}) - V_\varphi(W_t)| > \varepsilon$, do {
4.     conduct the gradient steepest ascent algorithm to maximize the averaged divergence defined in (8): $W_{t+1} \leftarrow W_t + \kappa\, \partial V_\varphi(W_t)/\partial W_t$, where $\kappa$ is a small value (e.g., 0.001);
5.     $t \leftarrow t + 1$;
6.   } // while on line 3
7. } // for on line 1
8. $W^* = \arg\max_{W_t} V_\varphi(W_t)$.

Continuing the proof of Observation 1, substituting (10) into (9) gives

$$W^* = \arg\max_W \sum_{1 \le i < j \le c} q_i q_j\, D_W(p_i \,\|\, p_j) = \arg\max_W \sum_{1 \le i < j \le c} q_i q_j \operatorname{tr}\big((W^T \Sigma W)^{-1} W^T D_{ij} W\big) = \arg\max_W \operatorname{tr}\Big((W^T \Sigma W)^{-1} W^T \Big(\sum_{i=1}^{c-1}\sum_{j=i+1}^{c} q_i q_j D_{ij}\Big) W\Big).$$

Because $S_b = \sum_{i=1}^{c-1}\sum_{j=i+1}^{c} q_i q_j D_{ij}$, as proved by Loog [10], and $S_t = S_b + S_w$ with $\Sigma = S_w$ (see [8]), we have

$$\arg\max_W \sum_{1 \le i < j \le c} q_i q_j\, D_W(p_i \,\|\, p_j) = \arg\max_W \operatorname{tr}\big((W^T S_w W)^{-1}(W^T S_b W)\big). \qquad (11)$$

It follows from (11) that a solution of LDA can be obtained by the generalized eigenvalue decomposition.

Example: Decell and Mayekar [5] maximized the arithmetic mean of all symmetric KL divergences between all pairs of classes in the projected subspace.
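The iterative procedure of Table 1 can be sketched generically in a few lines. This is our own sketch, not the authors' code: `objective` and `grad` stand for any $V_\varphi(W)$ and its derivative (for GMDA, Eqs. (15)-(18)), and the step size follows the table ($\kappa = 0.001$ by default).

```python
import numpy as np

def table1_subspace_selection(objective, grad, n, k, M=5, kappa=1e-3,
                              eps=1e-6, max_iter=1000, rng=None):
    """Multi-restart steepest ascent of Table 1 for a generic V_phi(W).

    objective(W) -> float evaluates the averaged divergence of Eq. (8);
    grad(W) -> (n, k) array is its derivative. Both are caller-supplied.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_W, best_val = None, -np.inf
    for _ in range(M):                        # step 1: M random restarts
        W = rng.standard_normal((n, k))       # step 2: random initial W
        val = objective(W)
        for _ in range(max_iter):             # steps 3-6: ascend until V stalls
            W = W + kappa * grad(W)           # step 4: W <- W + kappa dV/dW
            new_val = objective(W)
            if abs(new_val - val) <= eps:
                val = new_val
                break
            val = new_val
        if val > best_val:                    # step 8: keep the best restart
            best_val, best_W = val, W
    return best_W, best_val
```

With $\varphi(x) = x$ and equal class covariances, the maximizer coincides (up to an invertible $k \times k$ transform) with the generalized eigendecomposition solution of (11), so the iteration is only needed for non-arithmetic means such as the geometric mean.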
The symmetric KL divergences are given by

$$SKL(p_i \,\|\, p_j) = \frac{1}{2} KL(p_i \,\|\, p_j) + \frac{1}{2} KL(p_j \,\|\, p_i) = \frac{1}{4}\operatorname{tr}\big(\Sigma_i^{-1}\Sigma_j + \Sigma_j^{-1}\Sigma_i - 2I\big) + \frac{1}{4}\operatorname{tr}\big((\Sigma_i^{-1} + \Sigma_j^{-1})(\mu_i - \mu_j)(\mu_i - \mu_j)^T\big). \qquad (12)$$

In essence, there is no difference between [5] and maximizing the arithmetic mean of all KL divergences. De la Torre and Kanade [4] developed the oriented discriminant analysis (ODA) based on the same objective function used in [5], but used iterative majorization to obtain a solution; iterative majorization speeds up the training stage. Furthermore, they generalized ODA to the multimodal case as the multimodal ODA (MODA) by combining it with Gaussian mixture models (GMMs) learnt by a normalized cut; each class is modelled by a GMM.

3.3 How to Deal with the Multimodal Case

Up to this point it has been assumed that the measurement vectors in a given class are sampled from a single Gaussian distribution. This assumption often fails in large real-world data sets, such as those used for multi-view face/gait recognition, natural image classification or texture classification. To overcome this limitation, each class can be modeled by a GMM. Many methods for obtaining GMMs are described in the literature; examples include K-means [6], GMM with expectation maximization (EM) [6], graph cut [17], and spectral clustering. Unfortunately, these methods are not adaptive, in that the number of subclusters must be specified, and some of them (e.g., EM and K-means) are sensitive to the initial values. In our algorithm we use a recently introduced EM-like GMM algorithm proposed by Figueiredo and Jain [7], which we name the GMM-FJ method. The reasons for choosing GMM-FJ are as follows: it finds the number of subclusters; it is less sensitive to the choice of the initial values of the parameters than EM; and it can avoid the boundary of the parameter space. We assume that the measurements in each class are sampled from a GMM, and the projection matrix can be obtained by maximizing the general averaged divergences, which measure the averaged distortion between any pair of subclusters in different classes, i.e.,

$$V_\varphi(W) = \varphi^{-1}\left(\frac{\sum_{1 \le i < j \le c} \sum_{k \in C_i} \sum_{l \in C_j} q_i^k q_j^l\, \varphi\big(D_W(p_i^k \,\|\, p_j^l)\big)}{\sum_{1 \le i < j \le c} \sum_{s \in C_i} \sum_{t \in C_j} q_i^s q_j^t}\right), \qquad (13)$$

where $q_i^k$ is the prior probability of the $k$th subcluster of the $i$th class; $p_i^k$ is the sample probability of the $k$th
subcluster in the $i$th class; and $D_W(p_i^k \,\|\, p_j^l)$ is the divergence between the $k$th subcluster in the $i$th class and the $l$th subcluster in the $j$th class.

3.4 Geometric Mean based Subspace Selection

In LDA and ODA the arithmetic mean of the divergences is used to find a suitable subspace onto which to project the feature vectors. The main benefit of using the arithmetic mean is that the projection matrix can be obtained by the generalized eigenvalue decomposition. However, LDA is not optimal for multiclass classification [14] because of the class separation problem mentioned in Section 1. Therefore, it is useful to investigate other choices of $\varphi$ in (8). The log function is a suitable choice for $\varphi$ because it increases the effects of the small divergences and at the same time reduces the effects of the large divergences. On setting $\varphi(x) = \log(x)$ in (8), the generalized geometric mean of the divergences is obtained; the required subspace $W^*$ is given by

$$W^* = \arg\max_W \prod_{1 \le i < j \le c} \big(D_W(p_i \,\|\, p_j)\big)^{q_i q_j / \sum_{1 \le m < n \le c} q_m q_n}. \qquad (14)$$

It follows from the mean inequality that the generalized geometric mean is upper bounded by the arithmetic mean of the divergences, i.e.,

$$\prod_{1 \le i < j \le c} \big(D_W(p_i \,\|\, p_j)\big)^{q_i q_j / \sum_{m < n} q_m q_n} \le \frac{\sum_{1 \le i < j \le c} q_i q_j\, D_W(p_i \,\|\, p_j)}{\sum_{1 \le m < n \le c} q_m q_n}.$$

Furthermore, (14) emphasizes the total volume of all divergences. For example, in the special case of $q_i = q_j$ for all $i, j$,

$$\arg\max_W \prod_{1 \le i < j \le c} \big(D_W(p_i \,\|\, p_j)\big)^{q_i q_j / \sum_{m < n} q_m q_n} = \arg\max_W \prod_{1 \le i < j \le c} D_W(p_i \,\|\, p_j).$$

3.5 KL Divergence

In this paper, we combine the KL divergence and the geometric mean as an example for practical applications.
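The general mean (8) and its arithmetic and geometric special cases fit in a few lines. This is our own sketch (the names `v_phi` and `div` are ours); it also checks the mean inequality stated above on a small example in which one pair of classes is nearly merged.

```python
import numpy as np

def v_phi(div, q, phi, phi_inv):
    """Eq. (8): general averaged divergence over all class pairs i < j.

    div : (c, c) matrix with div[i, j] = D_W(p_i || p_j) for i < j.
    q   : (c,) vector of class priors.
    """
    i, j = np.triu_indices(len(q), k=1)
    w = q[i] * q[j]
    return phi_inv(np.sum(w * phi(div[i, j])) / np.sum(w))

# Three classes: pair (0, 1) is nearly merged, the other pairs are separated.
div = np.zeros((3, 3))
div[0, 1], div[0, 2], div[1, 2] = 0.1, 5.0, 8.0
q = np.full(3, 1.0 / 3.0)

arithmetic = v_phi(div, q, lambda x: x, lambda x: x)  # phi(x) = x,   Eq. (9)
geometric = v_phi(div, q, np.log, np.exp)             # phi(x) = log, Eq. (14)

# Mean inequality: the geometric mean never exceeds the arithmetic mean,
# and it is pulled down strongly by the small divergence of the merged pair.
assert geometric <= arithmetic
print(arithmetic, geometric)
```

Under the log objective each pair contributes $1/D$ to the gradient, so the nearly merged pair dominates the update, whereas the arithmetic mean weights every pair equally; this is the mechanism by which the geometric mean attends to classes that LDA would merge.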
Replacing $D_W(p_i \,\|\, p_j)$ with the KL divergence and optimizing the logarithm of (14), we have

$$W^* = \arg\max_W F(W), \qquad F(W) = \sum_{1 \le i < j \le c} \log KL_W(p_i \,\|\, p_j), \qquad (15)$$

where $KL_W(p_i \,\|\, p_j)$ is the KL divergence between the $i$th class and the $j$th class in the projected subspace,

$$KL_W(p_i \,\|\, p_j) = \frac{1}{2}\Big(\log\big|W^T \Sigma_j W\big| - \log\big|W^T \Sigma_i W\big| + \operatorname{tr}\big((W^T \Sigma_j W)^{-1} W^T (\Sigma_i + D_{ij}) W\big) - k\Big). \qquad (16)$$

To obtain the optimization procedure for the geometric mean and KL divergence based subspace selection algorithm from Table 1, we need the first-order derivative of $F(W)$,

$$\frac{\partial F(W)}{\partial W} = \sum_{1 \le i < j \le c} \frac{1}{KL_W(p_i \,\|\, p_j)}\, \frac{\partial KL_W(p_i \,\|\, p_j)}{\partial W}, \qquad (17)$$

and

$$\frac{\partial KL_W(p_i \,\|\, p_j)}{\partial W} = \Sigma_j W (W^T \Sigma_j W)^{-1} - \Sigma_i W (W^T \Sigma_i W)^{-1} + (\Sigma_i + D_{ij}) W (W^T \Sigma_j W)^{-1} - \Sigma_j W (W^T \Sigma_j W)^{-1} W^T (\Sigma_i + D_{ij}) W (W^T \Sigma_j W)^{-1}. \qquad (18)$$

The multimodal extension of (15) can be directly obtained from (13).

4 Comparative Studies Using Synthetic Data

In this section, we denote the proposed method as the geometric mean divergence analysis (GMDA), and compare GMDA with LDA [8], aPAC [11], WLDA (similar to aPAC, but with a different weighting function), HDA [12], ODA [4], and MODA [4]. We use the $d^{-3}$ weighting function for WLDA.

4.1 Heteroscedastic Problem

To examine the classification ability of these subspace selection methods on the heteroscedastic problem [12], we generate two classes such that each class has 500 samples, drawn from a Gaussian distribution. The two classes have identical mean values but different covariances. As shown in Fig. 2, LDA, aPAC, and WLDA separate class means without taking the differences between covariances into account. In contrast, HDA, ODA, and GMDA consider both the differences between class means and the differences between class covariances, so they have smaller training errors, as shown in Fig. 2.

4.2 Multimodal Problem

In many applications it is useful to model the distribution of a class using a GMM, because samples in the class may be drawn from a multimodal distribution. To demonstrate the classification ability of the multimodal extension of GMDA, we generate two classes; each class has two subclusters, and samples in each subcluster are drawn from a Gaussian. Fig. 3 shows the subspaces selected by the different methods. In this case LDA, WLDA, and aPAC do not select a suitable subspace for classification. However, the multimodal extensions of ODA and GMDA can find a suitable subspace. Furthermore, although HDA does not take account of multimodal classes, it can select a suitable subspace.
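The projected divergence (16) and its derivative (18) translate directly into NumPy. The sketch below is our own verification harness (the helper names `kl_proj` and `kl_grad` are ours): it evaluates (16), implements (18), and can be checked against finite differences on random positive definite covariances.

```python
import numpy as np

def kl_proj(W, Si, Sj, Dij, k):
    """Eq. (16): KL divergence between classes i and j in the subspace W."""
    A = W.T @ Si @ W
    B = W.T @ Sj @ W
    M = W.T @ (Si + Dij) @ W
    return 0.5 * (np.log(np.linalg.det(B)) - np.log(np.linalg.det(A))
                  + np.trace(np.linalg.inv(B) @ M) - k)

def kl_grad(W, Si, Sj, Dij):
    """Eq. (18): first-order derivative of Eq. (16) with respect to W."""
    Ainv = np.linalg.inv(W.T @ Si @ W)
    Binv = np.linalg.inv(W.T @ Sj @ W)
    C = Si + Dij
    return (Sj @ W @ Binv - Si @ W @ Ainv + C @ W @ Binv
            - Sj @ W @ Binv @ (W.T @ C @ W) @ Binv)
```

Summing `kl_grad / kl_proj` over all class pairs gives the gradient (17) of the objective (15), which is what the ascent in Table 1 uses.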
This is because, in this case, the two classes have similar class means but significantly different class covariance matrices when each class is modeled by a single Gaussian. For complex cases, e.g., when each class consists of more than 3 subclusters, HDA will fail to find the optimal subspace for classification.

4.3 Class Separation Problem

The most prominent advantage of GMDA is that it can significantly reduce the classification errors caused by the excessively strong effects of the large divergences between certain classes. To demonstrate this point, we generate three classes, where the samples in each class are drawn from a Gaussian distribution; two classes are close together and the third is far away. Fig. 4 demonstrates that GMDA shows a good ability to separate the last two classes of samples. However, LDA, HDA, and ODA do not give good results. The aPAC and WLDA algorithms are better than LDA, but neither of them gives a suitable projection direction. The results obtained from aPAC are better than those obtained from WLDA, because aPAC uses a better weighting strategy than WLDA.

5 Statistical Experiments

In this section, we utilize a synthetic data model, which is a generalization of the data generation model used by De la Torre and Kanade [4], to evaluate GMDA in terms of accuracy and robustness. The accuracy is measured by the average error rate, and the robustness is measured by the standard deviation of the classification error rates. In this data generation model, there are five classes. In our experiments, for each of the training/testing sets, the data generator gives 200 samples for each of the five classes (therefore, 1,000 samples in total). Moreover, the samples in each class are obtained from a Gaussian; each Gaussian density is a linear transformation of a standard normal distribution.
The linear transformations are defined by $x_{i;j} = T z + \mu_i + n$, where $x \in \mathbb{R}^{20}$, $z \sim N(0, I)$, $n \sim N(0, I)$ is additive noise, $i$ denotes the $i$th class, $j$ denotes the $j$th sample in this class, and $\mu_i$ is the mean value of the corresponding normal distribution. The $\mu_i$ are assigned from scaled draws of $2N(0, 1)$, shifted by $\pm 4$ and placed in different coordinate blocks, so that the five class means are distinct. The transformation matrix $T$ is a random matrix; each of its elements is sampled from $N(0, 5)$. Based on this data generation model, 800 groups (each group with its training and testing samples) of synthetic data are generated. For comparison, the subspace selection methods are first utilized to select a given number of features; then the Mahalanobis distance [6] and the nearest neighbour rule are used to examine the accuracy and robustness of GMDA in comparison with LDA and its extensions. The baseline algorithms are LDA, aPAC, WLDA, HDA, and ODA. We conducted the designed experiments 800 times based on randomly generated data sets, and the experimental results are reported in Tables 2-5. Tables 2 and 4 show the average error rates of LDA, aPAC, WLDA, HDA, ODA, and GMDA based on the Mahalanobis distance and the nearest neighbour rule, respectively. Herein arithmetic mean values are computed on different feature dimensions from 1 to 16 (by column). Correspondingly, the standard deviations under each condition, which measure the robustness of the classifiers, are given in Tables 3 and 5. We have twenty feature dimensions for each sample, and all the samples are divided into five classes; therefore, the maximal feature number for LDA, aPAC, and WLDA is 5 − 1 = 4. In contrast, HDA, ODA, and GMDA can extract more features than LDA, aPAC, and WLDA. Based on Tables 2-5, it can be concluded that GMDA outperforms LDA, aPAC, WLDA, HDA, and ODA consistently.

We now demonstrate why GMDA is a suitable subspace selection method for classification. Let us first study the relationship between $F(W)$, which is defined in (15), and the training error rate. Experiments are done on a data set randomly selected from the 800 data sets generated at the beginning of this section. We set the number of training iterations to 200.
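The data generator can be sketched as follows. Only the overall recipe, $x_{i;j} = T z + \mu_i + \text{noise}$ with the entries of the random transformation drawn from $N(0, 5)$, follows the text; the per-class transformations and the specific class-mean offsets below are our own illustrative placeholders.

```python
import numpy as np

def generate_dataset(n_per_class=200, dim=20, n_classes=5, rng=None):
    """Sample x_{i;j} = T z + mu_i + noise for each of the classes.

    T is a random matrix whose entries are drawn from N(0, 5), as in the
    text; the class-mean offsets are illustrative placeholders, and one
    T per class is an assumption of this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    X, y = [], []
    for i in range(n_classes):
        T = rng.normal(0.0, np.sqrt(5.0), size=(dim, dim))  # entries ~ N(0, 5)
        mu = np.zeros(dim)
        mu[i % dim] = 4.0 * i                # illustrative class-mean offset
        z = rng.standard_normal((n_per_class, dim))          # z ~ N(0, I)
        noise = rng.standard_normal((n_per_class, dim))      # additive noise
        X.append(z @ T.T + mu + noise)
        y.append(np.full(n_per_class, i))
    return np.vstack(X), np.concatenate(y)
```

With the defaults this yields the 1,000 samples per group described above (five classes of 200 samples in 20 dimensions), from which 800 independent training/testing groups can be drawn.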
In Fig. 5, the left panel shows that the classification error rates decrease as the training iterations increase, and the right panel shows that the objective function values $F(W)$ increase monotonically with the training iterations. Therefore, the classification error rates decrease as $F(W)$ increases; this means that maximizing $F(W)$ is useful for achieving a low classification error rate. It is also important to investigate how the KL divergences between different classes change with the increasing number of training iterations, because this helps us to understand deeply how and why GMDA reduces the class separation problem. In Fig. 6, we show how the KL divergences change in GMDA over the 1st, 2nd, 5th, 10th, 20th, and 200th training iterations, respectively. The small KL divergences, which are less than 2, are marked with rectangles. There are 5 classes, so we can map the KL divergences to a $5 \times 5$ matrix with zero diagonal values; the entry in the $j$th column and the $i$th row is the KL divergence between the $i$th class and the $j$th class. We denote it as $KL_{ij}(W_t)$, where $t$ denotes the $t$th training iteration. Because the KL divergence is not symmetric, the matrix is not symmetric, i.e., $KL_{ij}(W_t) \ne KL_{ji}(W_t)$. According to Fig. 6, in the 1st training iteration (the top left $5 \times 5$ matrix), there are 8 values less than 2. In the 2nd iteration (the top right matrix), there are only 4 values less than 2; compared with the 1st iteration, 6 out of the 8 have increased. In the 5th iteration (the middle left matrix), there are only 2 values less than 2, and they have increased in comparison with the 2nd iteration. However, these two divergences have decreased to 0.439 and 0.366 by the 200th iteration (the bottom right matrix), in comparison with the 20th iteration (the bottom left matrix), so as to guarantee the increase of $F(W)$; this is not suitable for separating the corresponding classes, because the divergences between them are very small.

6 Conclusion

If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the Fisher-Rao linear discriminant analysis (LDA) maximizes the mean value of the Kullback-Leibler (KL) divergences between the different classes.
We have generalized this point of view to obtain a framework for choosing a subspace by 1) generalizing the KL divergence to the Bregman divergence and 2) generalizing the arithmetic mean to a general mean. The framework is named the general averaged divergence analysis (GADA). Under this framework, geometric mean and KL divergence based subspace selection is then studied. LDA has a critical drawback in that the projection to a subspace tends to merge those classes that are close together in the original feature space; a large number of experiments based on synthetic data have shown that our method significantly outperforms LDA and several representative LDA extensions in overcoming this drawback.
References

[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] L. M. Bregman, "The Relaxation Method of Finding the Common Point of Convex Sets and Its Application to the Solution of Problems in Convex Programming," USSR Computational Mathematics and Mathematical Physics, vol. 7, 1967.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, New York: Wiley, 1991.
[4] F. De la Torre and T. Kanade, "Multimodal Oriented Discriminant Analysis," Int'l Conf. Machine Learning, 2005.
[5] H. P. Decell and S. M. Mayekar, "Feature Combinations and the Divergence Criterion," Computers and Mathematics with Applications, vol. 3, pp. 71-76, 1977.
[6] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, John Wiley and Sons, 2001.
[7] M. Figueiredo and A. K. Jain, "Unsupervised Learning of Finite Mixture Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 3, 2002.
[8] K. Fukunaga, Introduction to Statistical Pattern Recognition (Second Edition), Academic Press, 1990.
[9] R. A. Fisher, "The Statistical Utilization of Multiple Measurements," Annals of Eugenics, vol. 8, 1938.
[10] M. Loog, Approximate Pairwise Accuracy Criteria for Multiclass Linear Dimension Reduction: Generalizations of the Fisher Criterion, Delft Univ. Press, 1999.
[11] M. Loog, R. P. W. Duin, and R. Haeb-Umbach, "Multiclass Linear Dimension Reduction by Weighted Pairwise Fisher Criteria," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 7, July 2001.
[12] M. Loog and R. P. W. Duin, "Linear Dimensionality Reduction via a Heteroscedastic Extension of LDA: The Chernoff Criterion," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 6, June 2004.
[13] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face Recognition Using LDA-Based Algorithms," IEEE Trans. Neural Networks, vol. 14, no. 1, 2003.
[14] G. J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition, Wiley, New York, 1992.
[15] N. Murata, T. Takenouchi, T. Kanamori, and S. Eguchi, "Information Geometry of U-Boost and Bregman Divergence," Neural Computation, vol. 16, no. 7, pp. 1437-1481, 2004.
[16] C. R. Rao, "The Utilization of Multiple Measurements in Problems of Biological Classification," Journal of the Royal Statistical Society, Series B, vol. 10, 1948.
[17] J. Shi and J. Malik, "Normalized Cuts and Image Segmentation,"
IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 8, Aug. 2000.

Fig. 2. Heteroscedastic problem: from left to right and from top to bottom, six subfigures show the projection directions obtained using LDA, HDA, aPAC, WLDA, ODA, and GMDA. The training errors of these methods, as measured by the Mahalanobis distance, are 0.340, 0.2880, 0.340, 0.340, 0.2390, and … . ODA and GMDA find the best projection direction for classification.
Fig. 3. Multimodal problem: from left to right and from top to bottom, six subfigures describe the optimal projection directions obtained using LDA, HDA, aPAC, WLDA, MODA, and a multimodal extension of GMDA (M-GMDA). The training errors of these methods, measured by the Mahalanobis distance, are 0.097, 0.067, 0.097, 0.097, 0.0083, and … . MODA and M-GMDA find the best projection direction for classification.
Fig. 4. Large class divergence problem: from left to right and from top to bottom, nine subfigures describe the projection directions (indicated by lines in each subfigure) obtained using LDA, HDA, aPAC, WLDA, ODA, and GMDA. The training errors of these methods, measured by the Mahalanobis distance, are 0.300, 0.300, 0.2900, 0.3033, 0.300, and 0.067. GMDA finds the best projection direction for classification.

Fig. 5. The consistency of the GMDA objective function $F(W)$ and the classification error rate (left: classification error rate vs. training iterations; right: objective function value vs. training iterations).

Fig. 6. The KL divergences in GMDA over the 1st, 2nd, 5th, 10th, 20th, and 200th training iterations.
Table 2: Average error rates (mean over 800 experiments) of LDA, aPAC, WLDA, HDA, ODA, and GMDA (Mahalanobis distance).

Table 3: Standard deviations of error rates (over 800 experiments) of LDA, aPAC, WLDA, HDA, ODA, and GMDA (Mahalanobis distance).

Table 4: Average error rates (mean over 800 experiments) of LDA, aPAC, WLDA, HDA, ODA, and GMDA (nearest neighbour rule).

Table 5: Standard deviations of error rates (over 800 experiments) of LDA, aPAC, WLDA, HDA, ODA, and GMDA (nearest neighbour rule).
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationIntroducing Entropy Distributions
Graubner, Schdt & Proske: Proceedngs of the 6 th Internatonal Probablstc Workshop, Darstadt 8 Introducng Entropy Dstrbutons Noel van Erp & Peter van Gelder Structural Hydraulc Engneerng and Probablstc
More information2E Pattern Recognition Solutions to Introduction to Pattern Recognition, Chapter 2: Bayesian pattern classification
E395 - Pattern Recognton Solutons to Introducton to Pattern Recognton, Chapter : Bayesan pattern classfcaton Preface Ths document s a soluton manual for selected exercses from Introducton to Pattern Recognton
More informationUnified Subspace Analysis for Face Recognition
Unfed Subspace Analyss for Face Recognton Xaogang Wang and Xaoou Tang Department of Informaton Engneerng The Chnese Unversty of Hong Kong Shatn, Hong Kong {xgwang, xtang}@e.cuhk.edu.hk Abstract PCA, LDA
More informationGadjah Mada University, Indonesia. Yogyakarta State University, Indonesia Karangmalang Yogyakarta 55281
Reducng Fuzzy Relatons of Fuzzy Te Seres odel Usng QR Factorzaton ethod and Its Applcaton to Forecastng Interest Rate of Bank Indonesa Certfcate Agus aan Abad Subanar Wdodo 3 Sasubar Saleh 4 Ph.D Student
More informationRecap: the SVM problem
Machne Learnng 0-70/5-78 78 Fall 0 Advanced topcs n Ma-Margn Margn Learnng Erc Xng Lecture 0 Noveber 0 Erc Xng @ CMU 006-00 Recap: the SVM proble We solve the follong constraned opt proble: a s.t. J 0
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationAN ANALYSIS OF A FRACTAL KINETICS CURVE OF SAVAGEAU
AN ANALYI OF A FRACTAL KINETIC CURE OF AAGEAU by John Maloney and Jack Hedel Departent of Matheatcs Unversty of Nebraska at Oaha Oaha, Nebraska 688 Eal addresses: aloney@unoaha.edu, jhedel@unoaha.edu Runnng
More informationStatistical pattern recognition
Statstcal pattern recognton Bayes theorem Problem: decdng f a patent has a partcular condton based on a partcular test However, the test s mperfect Someone wth the condton may go undetected (false negatve
More informationOur focus will be on linear systems. A system is linear if it obeys the principle of superposition and homogenity, i.e.
SSTEM MODELLIN In order to solve a control syste proble, the descrptons of the syste and ts coponents ust be put nto a for sutable for analyss and evaluaton. The followng ethods can be used to odel physcal
More informationMinimization of l 2 -Norm of the KSOR Operator
ournal of Matheatcs and Statstcs 8 (): 6-70, 0 ISSN 59-36 0 Scence Publcatons do:0.38/jssp.0.6.70 Publshed Onlne 8 () 0 (http://www.thescpub.co/jss.toc) Mnzaton of l -Nor of the KSOR Operator Youssef,
More informationFUZZY MODEL FOR FORECASTING INTEREST RATE OF BANK INDONESIA CERTIFICATE
he 3 rd Internatonal Conference on Quanttatve ethods ISBN 979-989 Used n Econoc and Busness. June 6-8, 00 FUZZY ODEL FOR FORECASING INERES RAE OF BANK INDONESIA CERIFICAE Agus aan Abad, Subanar, Wdodo
More informationCentroid Uncertainty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Problems
Centrod Uncertanty Bounds for Interval Type-2 Fuzzy Sets: Forward and Inverse Probles Jerry M. Mendel and Hongwe Wu Sgnal and Iage Processng Insttute Departent of Electrcal Engneerng Unversty of Southern
More informationXiangwen Li. March 8th and March 13th, 2001
CS49I Approxaton Algorths The Vertex-Cover Proble Lecture Notes Xangwen L March 8th and March 3th, 00 Absolute Approxaton Gven an optzaton proble P, an algorth A s an approxaton algorth for P f, for an
More informationarxiv: v2 [math.co] 3 Sep 2017
On the Approxate Asyptotc Statstcal Independence of the Peranents of 0- Matrces arxv:705.0868v2 ath.co 3 Sep 207 Paul Federbush Departent of Matheatcs Unversty of Mchgan Ann Arbor, MI, 4809-043 Septeber
More informationSolving Fuzzy Linear Programming Problem With Fuzzy Relational Equation Constraint
Intern. J. Fuzz Maeatcal Archve Vol., 0, -0 ISSN: 0 (P, 0 0 (onlne Publshed on 0 Septeber 0 www.researchasc.org Internatonal Journal of Solvng Fuzz Lnear Prograng Proble W Fuzz Relatonal Equaton Constrant
More informationModified parallel multisplitting iterative methods for non-hermitian positive definite systems
Adv Coput ath DOI 0.007/s0444-0-9262-8 odfed parallel ultsplttng teratve ethods for non-hertan postve defnte systes Chuan-Long Wang Guo-Yan eng Xue-Rong Yong Receved: Septeber 20 / Accepted: 4 Noveber
More informationhalftoning Journal of Electronic Imaging, vol. 11, no. 4, Oct Je-Ho Lee and Jan P. Allebach
olorant-based drect bnary search» halftonng Journal of Electronc Iagng, vol., no. 4, Oct. 22 Je-Ho Lee and Jan P. Allebach School of Electrcal Engneerng & oputer Scence Kyungpook Natonal Unversty Abstract
More informationDenote the function derivatives f(x) in given points. x a b. Using relationships (1.2), polynomials (1.1) are written in the form
SET OF METHODS FO SOUTION THE AUHY POBEM FO STIFF SYSTEMS OF ODINAY DIFFEENTIA EUATIONS AF atypov and YuV Nulchev Insttute of Theoretcal and Appled Mechancs SB AS 639 Novosbrs ussa Introducton A constructon
More information1. Statement of the problem
Volue 14, 010 15 ON THE ITERATIVE SOUTION OF A SYSTEM OF DISCRETE TIMOSHENKO EQUATIONS Peradze J. and Tsklaur Z. I. Javakhshvl Tbls State Uversty,, Uversty St., Tbls 0186, Georga Georgan Techcal Uversty,
More informationSupport Vector Machines. Vibhav Gogate The University of Texas at dallas
Support Vector Machnes Vbhav Gogate he Unversty of exas at dallas What We have Learned So Far? 1. Decson rees. Naïve Bayes 3. Lnear Regresson 4. Logstc Regresson 5. Perceptron 6. Neural networks 7. K-Nearest
More informationLecture 3. Camera Models 2 & Camera Calibration. Professor Silvio Savarese Computational Vision and Geometry Lab. 13- Jan- 15.
Lecture Caera Models Caera Calbraton rofessor Slvo Savarese Coputatonal Vson and Geoetry Lab Slvo Savarese Lecture - - Jan- 5 Lecture Caera Models Caera Calbraton Recap of caera odels Caera calbraton proble
More informationStatistical analysis of Accelerated life testing under Weibull distribution based on fuzzy theory
Statstcal analyss of Accelerated lfe testng under Webull dstrbuton based on fuzzy theory Han Xu, Scence & Technology on Relablty & Envronental Engneerng Laboratory, School of Relablty and Syste Engneerng,
More informationIntegral Transforms and Dual Integral Equations to Solve Heat Equation with Mixed Conditions
Int J Open Probles Copt Math, Vol 7, No 4, Deceber 214 ISSN 1998-6262; Copyrght ICSS Publcaton, 214 www-csrsorg Integral Transfors and Dual Integral Equatons to Solve Heat Equaton wth Mxed Condtons Naser
More informationOn the Construction of Polar Codes
On the Constructon of Polar Codes Ratn Pedarsan School of Coputer and Councaton Systes, Lausanne, Swtzerland. ratn.pedarsan@epfl.ch S. Haed Hassan School of Coputer and Councaton Systes, Lausanne, Swtzerland.
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationSemi-supervised Classification with Active Query Selection
Sem-supervsed Classfcaton wth Actve Query Selecton Jao Wang and Swe Luo School of Computer and Informaton Technology, Beng Jaotong Unversty, Beng 00044, Chna Wangjao088@63.com Abstract. Labeled samples
More informationInternational Journal of Mathematical Archive-9(3), 2018, Available online through ISSN
Internatonal Journal of Matheatcal Archve-9(3), 208, 20-24 Avalable onlne through www.ja.nfo ISSN 2229 5046 CONSTRUCTION OF BALANCED INCOMPLETE BLOCK DESIGNS T. SHEKAR GOUD, JAGAN MOHAN RAO M AND N.CH.
More informationCHAPTER 6 CONSTRAINED OPTIMIZATION 1: K-T CONDITIONS
Chapter 6: Constraned Optzaton CHAPER 6 CONSRAINED OPIMIZAION : K- CONDIIONS Introducton We now begn our dscusson of gradent-based constraned optzaton. Recall that n Chapter 3 we looked at gradent-based
More informationOn the Construction of Polar Codes
On the Constructon of Polar Codes Ratn Pedarsan School of Coputer and Councaton Systes, Lausanne, Swtzerland. ratn.pedarsan@epfl.ch S. Haed Hassan School of Coputer and Councaton Systes, Lausanne, Swtzerland.
More informationThe Impact of the Earth s Movement through the Space on Measuring the Velocity of Light
Journal of Appled Matheatcs and Physcs, 6, 4, 68-78 Publshed Onlne June 6 n ScRes http://wwwscrporg/journal/jap http://dxdoorg/436/jap646 The Ipact of the Earth s Moeent through the Space on Measurng the
More informationA Knowledge-Based Feature Selection Method for Text Categorization
A Knowledge-Based Feature Selecton Method for Text Categorzaton Yan Xu,2, JnTao L, Bn Wang,ChunMng Sun,2 Insttute of Coputng Technology,Chnese Acadey of Scences No.6 Kexueyuan South Road, Zhongguancun,Hadan
More information1 The Mistake Bound Model
5-850: Advanced Algorthms CMU, Sprng 07 Lecture #: Onlne Learnng and Multplcatve Weghts February 7, 07 Lecturer: Anupam Gupta Scrbe: Bryan Lee,Albert Gu, Eugene Cho he Mstake Bound Model Suppose there
More informationNEW INTELLIGENT CLASSIFICATION METHOD BASED ON IMPROVED MEB ALGORITHM
INTERNATIONAL JOURNAL ON SMART SENSING AND INTELLIGENT SYSTEMS VOL. 7, NO. 1, MARCH 014 NEW INTELLIGENT CLASSIFICATION METHOD BASED ON IMPROVED MEB ALGORITHM Yongqng Wang, Le Lu Departent of Coputer Scence
More informationMultipoint Analysis for Sibling Pairs. Biostatistics 666 Lecture 18
Multpont Analyss for Sblng ars Bostatstcs 666 Lecture 8 revously Lnkage analyss wth pars of ndvduals Non-paraetrc BS Methods Maxu Lkelhood BD Based Method ossble Trangle Constrant AS Methods Covered So
More informationAn Accurate Measure for Multilayer Perceptron Tolerance to Weight Deviations
Neural Processng Letters 10: 121 130, 1999. 1999 Kluwer Acadec Publshers. Prnted n the Netherlands. 121 An Accurate Measure for Multlayer Perceptron Tolerance to Weght Devatons JOSE L. BERNIER, J. ORTEGA,
More informationCOMP th April, 2007 Clement Pang
COMP 540 12 th Aprl, 2007 Cleent Pang Boostng Cobnng weak classers Fts an Addtve Model Is essentally Forward Stagewse Addtve Modelng wth Exponental Loss Loss Functons Classcaton: Msclasscaton, Exponental,
More informationCHAPTER 7 CONSTRAINED OPTIMIZATION 1: THE KARUSH-KUHN-TUCKER CONDITIONS
CHAPER 7 CONSRAINED OPIMIZAION : HE KARUSH-KUHN-UCKER CONDIIONS 7. Introducton We now begn our dscusson of gradent-based constraned optzaton. Recall that n Chapter 3 we looked at gradent-based unconstraned
More informationA Radon-Nikodym Theorem for Completely Positive Maps
A Radon-Nody Theore for Copletely Postve Maps V P Belavn School of Matheatcal Scences, Unversty of Nottngha, Nottngha NG7 RD E-al: vpb@aths.nott.ac.u and P Staszews Insttute of Physcs, Ncholas Coperncus
More informationLinear Classification, SVMs and Nearest Neighbors
1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More information1.3 Hence, calculate a formula for the force required to break the bond (i.e. the maximum value of F)
EN40: Dynacs and Vbratons Hoework 4: Work, Energy and Lnear Moentu Due Frday March 6 th School of Engneerng Brown Unversty 1. The Rydberg potental s a sple odel of atoc nteractons. It specfes the potental
More informationGradient Descent Learning and Backpropagation
Artfcal Neural Networks (art 2) Chrstan Jacob Gradent Descent Learnng and Backpropagaton CSC 533 Wnter 200 Learnng by Gradent Descent Defnton of the Learnng roble Let us start wth the sple case of lnear
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationASYMMETRIC TRAFFIC ASSIGNMENT WITH FLOW RESPONSIVE SIGNAL CONTROL IN AN URBAN NETWORK
AYMMETRIC TRAFFIC AIGNMENT WITH FLOW REPONIVE IGNAL CONTROL IN AN URBAN NETWORK Ken'etsu UCHIDA *, e'ch KAGAYA **, Tohru HAGIWARA *** Dept. of Engneerng - Hoado Unversty * E-al: uchda@eng.houda.ac.p **
More informationModule 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:
More information10-701/ Machine Learning, Fall 2005 Homework 3
10-701/15-781 Machne Learnng, Fall 2005 Homework 3 Out: 10/20/05 Due: begnnng of the class 11/01/05 Instructons Contact questons-10701@autonlaborg for queston Problem 1 Regresson and Cross-valdaton [40
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationCollaborative Filtering Recommendation Algorithm
Vol.141 (GST 2016), pp.199-203 http://dx.do.org/10.14257/astl.2016.141.43 Collaboratve Flterng Recoendaton Algorth Dong Lang Qongta Teachers College, Haou 570100, Chna, 18689851015@163.co Abstract. Ths
More informationCOMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
More informationSolutions for Homework #9
Solutons for Hoewor #9 PROBEM. (P. 3 on page 379 n the note) Consder a sprng ounted rgd bar of total ass and length, to whch an addtonal ass s luped at the rghtost end. he syste has no dapng. Fnd the natural
More informationWhy Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)
Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch
More informationIdentifying assessor differences in weighting the underlying sensory dimensions EL MOSTAFA QANNARI (1) MICHAEL MEYNERS (2)
Identfyng assessor dfferences n weghtng the underlyng sensory densons EL MOSTAFA QANNARI () MICHAEL MEYNERS (2) () ENITIAA/INRA - Unté de Statstque Applquée à la Caractérsaton des Alents Rue de la Géraudère
More information1 GSW Iterative Techniques for y = Ax
1 for y = A I m gong to cheat here. here are a lot of teratve technques that can be used to solve the general case of a set of smultaneous equatons (wrtten n the matr form as y = A), but ths chapter sn
More informationFace Recognition Using Ada-Boosted Gabor Features
Face Recognton Usng Ada-Boosted Gabor Features Peng Yang, Shguang Shan, Wen Gao, Stan Z. L, Dong Zhang Insttute of Coputng Technology of Chnese Acadey Scence Mcrosoft Research Asa {pyang, sgshan,wgao}@dl.ac.cn,
More informationITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING
ESE 5 ITERATIVE ESTIMATION PROCEDURE FOR GEOSTATISTICAL REGRESSION AND GEOSTATISTICAL KRIGING Gven a geostatstcal regresson odel: k Y () s x () s () s x () s () s, s R wth () unknown () E[ ( s)], s R ()
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationBoostrapaggregating (Bagging)
Boostrapaggregatng (Baggng) An ensemble meta-algorthm desgned to mprove the stablty and accuracy of machne learnng algorthms Can be used n both regresson and classfcaton Reduces varance and helps to avod
More informationDetermination of the Confidence Level of PSD Estimation with Given D.O.F. Based on WELCH Algorithm
Internatonal Conference on Inforaton Technology and Manageent Innovaton (ICITMI 05) Deternaton of the Confdence Level of PSD Estaton wth Gven D.O.F. Based on WELCH Algorth Xue-wang Zhu, *, S-jan Zhang
More informationFermi-Dirac statistics
UCC/Physcs/MK/EM/October 8, 205 Fer-Drac statstcs Fer-Drac dstrbuton Matter partcles that are eleentary ostly have a type of angular oentu called spn. hese partcles are known to have a agnetc oent whch
More informationAn Optimal Bound for Sum of Square Roots of Special Type of Integers
The Sxth Internatonal Syposu on Operatons Research and Its Applcatons ISORA 06 Xnang, Chna, August 8 12, 2006 Copyrght 2006 ORSC & APORC pp. 206 211 An Optal Bound for Su of Square Roots of Specal Type
More informationQuantum Particle Motion in Physical Space
Adv. Studes Theor. Phys., Vol. 8, 014, no. 1, 7-34 HIKARI Ltd, www.-hkar.co http://dx.do.org/10.1988/astp.014.311136 Quantu Partcle Moton n Physcal Space A. Yu. Saarn Dept. of Physcs, Saara State Techncal
More informationOptimal Control Scheme for Nonlinear Systems with Saturating Actuator Using ε-iterative Adaptive Dynamic Programming
UKACC Internatonal Conference on Control Cardff, UK, 3-5 Septeber Optal Control Schee for Nonlnear Systes wth Saturatng Actuator Usng -Iteratve Adaptve Dynac Prograng Xaofeng Ln, Yuanjun Huang and Nuyun
More informationOn Syndrome Decoding of Punctured Reed-Solomon and Gabidulin Codes 1
Ffteenth Internatonal Workshop on Algebrac and Cobnatoral Codng Theory June 18-24, 2016, Albena, Bulgara pp. 35 40 On Syndroe Decodng of Punctured Reed-Soloon and Gabduln Codes 1 Hannes Bartz hannes.bartz@tu.de
More informationRiccati Equation Solution Method for the Computation of the Solutions of X + A T X 1 A = Q and X A T X 1 A = Q
he Open Appled Inforatcs Journal, 009, 3, -33 Open Access Rccat Euaton Soluton Method for the Coputaton of the Solutons of X A X A Q and X A X A Q Mara Ada *, Ncholas Assas, Grgors zallas and Francesca
More informationMarkov Chain Monte-Carlo (MCMC)
Markov Chan Monte-Carlo (MCMC) What for s t and what does t look lke? A. Favorov, 2003-2017 favorov@sens.org favorov@gal.co Monte Carlo ethod: a fgure square The value s unknown. Let s saple a rando value
More informationSolution of Linear System of Equations and Matrix Inversion Gauss Seidel Iteration Method
Soluton of Lnear System of Equatons and Matr Inverson Gauss Sedel Iteraton Method It s another well-known teratve method for solvng a system of lnear equatons of the form a + a22 + + ann = b a2 + a222
More informationChapter 1. Theory of Gravitation
Chapter 1 Theory of Gravtaton In ths chapter a theory of gravtaton n flat space-te s studed whch was consdered n several artcles by the author. Let us assue a flat space-te etrc. Denote by x the co-ordnates
More information4 Column generation (CG) 4.1 Basics of column generation. 4.2 Applying CG to the Cutting-Stock Problem. Basic Idea of column generation
4 Colun generaton (CG) here are a lot of probles n nteger prograng where even the proble defnton cannot be effcently bounded Specfcally, the nuber of coluns becoes very large herefore, these probles are
More information